Learning About Privilege Makes Liberals Look More Conservative

 

Not a good representation of poverty when people usually don’t use cash anymore

Why are poor people poor? Your answer to that question determines a lot about your feelings and responses towards them. If you think people are poor because they're good social investments who happen to be experiencing a patch of bad luck outside of their control – in other words, that their poverty isn't really their fault – your interest in seeing that they receive assistance increases (http://popsych.org/who-deserves-healthcare-and-unemployment-benefits/). On the other hand, if people are perceived to be poor because of undesirable personality traits – like laziness – and their poverty is their own fault, then people are less interested in providing them with assistance (http://popsych.org/socially-strategic-welfare/). This makes sense in light of the prospect that people don't help others simply because those other people need help. A psychological mechanism that encouraged its bearer to aid others at a personal cost wouldn't do much to help its bearer succeed on the evolutionary stage unless those personal costs were later recouped. You help them at time A so that you get something in return at time B that outweighs the initial helping costs. If you're helping someone who needs help because they're lazy, it's less likely they're going to suddenly find motivation to help you later than if you helped someone who's just unlucky.
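
To put that time-A/time-B logic in slightly more concrete terms, here is a minimal sketch in Python of the condition under which helping "pays" for the helper. The numbers and the function are purely illustrative assumptions of mine, not figures from any paper: the expected return later has to exceed the cost now, and a target perceived as lazy plausibly lowers the probability of that return.

```python
def helping_pays(cost_now, benefit_later, p_reciprocate):
    """Helping at time A is worthwhile only if the expected return at time B
    exceeds the initial cost. All values are illustrative assumptions."""
    return p_reciprocate * benefit_later > cost_now

# An "unlucky" target is assumed more likely to reciprocate than a "lazy" one
print(helping_pays(cost_now=10, benefit_later=15, p_reciprocate=0.8))  # True
print(helping_pays(cost_now=10, benefit_later=15, p_reciprocate=0.3))  # False
```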

God helps those who can help him later

The extent to which people differ in their desire to help the poor, then, likely varies with the attributions they make for poverty: If people largely believe poverty isn’t the fault of the poor, they will favor helping the poor more broadly, while those who believe poverty is the fault of the poor will disfavor helping them, in general. This divide should go a long way to explaining why, in the US, Liberals tend to favor social programs for helping the poor more than conservatives. Indeed, that precise pattern popped up in a recent paper by Cooley et al (2019) when participants read the following description of a made-up poor person:

Kevin, a[n]…American living in New York City, would say his life has been defined by poverty. As a child, Kevin was raised by a single mom who struggled to balance several part-time jobs simply to pay the bills. Most winters, they had no heat; and, it was a daily question whether they would have enough to eat. In late 2016, Kevin began to receive welfare assistance. Since then, he has not applied for any jobs and instead has cycled between jail cells, shelters, emergency rooms and the streets. Although Kevin would like to be financially independent, he doesn’t feel he has the skills or ability to obtain a well-paying job.

The results showed that as political liberalism increased, people tended both to report more sympathy for Kevin and to make more external attributions for the causes of his poverty. Liberals were more interested in helping because they blamed Kevin less for his circumstances.

If you fancy yourself a liberal, take this time to pat yourself on the back for caring about Kevin’s plight. Good for you. If you fancy yourself a conservative, you can also take this time to pat yourself on the back for your realism about why Kevin is poor.

Now if that was all there was to this study, there might not be too much to talk about. However, the focus of this paper was more specific than general attitudes about poverty and political affiliation. Instead, the authors also looked at Kevin's race: what happens when Kevin is described as White or Black in that opening sentence? As it turns out…nothing. While both liberals and conservatives were modestly more sympathetic towards a Black Kevin's plight, these differences weren't significant. Race didn't seem to enter the equation when people were looking at this specific example of a poor person. That should be a good thing, I would think; people were judging Kevin as Kevin, rather than as a proxy for his entire race.

Again, if that's all there was to this study, there might still not be much to talk about. It's the final twist of the experiment that brings it all home: how do people respond to a White or Black Kevin after reading a bit about white privilege?

See how everyone’s angry here? That’s called foreshadowing

The experiment (number 2 in the paper) went as follows: 650 participants began by reading a story. This story was either about the importance of a daily routine (the neutral control condition) or about white privilege (the experimental condition). Specifically, the privilege story read:

In America, there is a long history of White people having more power than other racial groups (e.g., Black people). Although many people think of racial inequality as decreasing, there are still privileges that are experienced by White Americans that are not true for other racial groups. For example, in her essay “White Privilege: Unpacking the Invisible Knapsack” Peggy McIntosh, PhD, lists different privileges that she experiences as a White person living in America.

Four specific examples were provided, including being able to be in the company of people of your same race most of the time, see your race widely and positively presented in the media, not being asked to speak on behalf of your racial group, and not having your race work against you if you need legal or medical help.

Once participants had read that story, they were then presented with the Kevin story from above, asked to respond about how much sympathy they felt for him and how much they blamed him for his situation before finally completing some demographic measures. This allowed the authors to probe what effect this brief discussion of white privilege had on people’s responses.

As it turned out, the conservatives didn't seem to take much away from that brief lesson on privilege: on a scale of 0 (strongly disagree) to 100 (strongly agree), conservatives reported an equal amount of sympathy for Kevin whether he was White (M = 59) or Black (M = 61). As these numbers closely mirrored the values reported for conservatives in the control condition, we can conclude that conservatives didn't seem to care about the privilege talk.

The liberals were listening, on the other hand. In the experimental condition, they reported more sympathy for the Black Kevin (M = 76) than for the White one (M = 60). So liberals and conservatives seemed to "agree" about how much sympathy White Kevin deserved, while liberals cared more about Black Kevin. Does that mean the privilege lesson made liberals care about Black Kevin more? Not at all. Examining the control condition makes the most interesting finding clear: when they were simply reading about routines, liberals cared as much about White Kevin (M = 71) as about Black Kevin (M = 74). Comparing the numbers from the control and experimental groups, the following pattern of results emerges: when not thinking about white privilege, liberals cared more about poor people than conservatives did, and neither seemed to care about race. When white privilege was added to the equation, the only difference that emerged was that liberals started to care less about White Kevin and blamed him more for his problems, without showing any increase in care for Black Kevin.
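
To keep that pattern of means straight, here is a small sketch that simply restates the sympathy means reported above (0–100 scale) and computes the two contrasts of interest. The numbers are the ones given in the text; the data structure and labels are mine.

```python
# Mean sympathy for Kevin (0-100 scale), as reported in the text above
sympathy = {
    ("liberal", "control"):        {"white": 71, "black": 74},
    ("liberal", "privilege"):      {"white": 60, "black": 76},
    ("conservative", "privilege"): {"white": 59, "black": 61},
}

# The effect of the privilege prime on liberals falls almost entirely on White Kevin
drop_for_white = sympathy[("liberal", "control")]["white"] - sympathy[("liberal", "privilege")]["white"]
gain_for_black = sympathy[("liberal", "privilege")]["black"] - sympathy[("liberal", "control")]["black"]
print(drop_for_white, gain_for_black)  # 11-point drop for White Kevin vs. a 2-point gain for Black Kevin
```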

“At least I didn’t help that poor white guy, which makes me a good person”

In sum, it looked like briefly reading about white privilege made liberals more conservative in their responses towards poor White people. It was a purely negative effect, with no apparent benefit for poor Black people. Conservatives, on the other hand, remained consistent, suggesting the privilege talk wasn't doing any good there either. While it is only speculative, it is not hard to imagine how these effects might carry into other domains – like gender – or how they might become more extreme when the discussion of white privilege isn't limited to a short passage but instead takes up increasingly larger portions of social discourse. If this results in less care for certain groups without a corresponding increase in care for others, it should be a cause for concern to anyone interested in seeing poverty addressed effectively. It might also be a concern if your interest is in treating people as individuals, rather than as proxies for an entire group of people.

References: Cooley, E., Brown-Iannuzzi, J., Lei, R., & Cipolli, W. (2019). Complex Intersections of Race and Class: Among Social Liberals, Learning About White Privilege Reduces Sympathy, Increases Blame, and Decreases External Attributions for White People Struggling With Poverty. Journal of Experimental Psychology: General. http://dx.doi.org/10.1037/xge0000605

 

Money For Nothing, But The Chicks Aren’t Free

When people see young, attractive women in relationships with older and/or unattractive men, the usual perception that comes to mind is that the relationship revolves around money. This perception is common because it tends to be accurate: women do, in fact, tend to prefer men who both have access to financial resources and who are willing to share them. What is rather notable is that the reverse isn't nearly as common: a young, attractive man shacking up with an older, rich woman just doesn't call too many examples to mind. Women seem to have a much more pronounced preference for wealth in a mate than men do. While examples of such preferences playing themselves out in real life exist anecdotally, it's always good to try and showcase their existence empirically.

Early attempts were made by Dr. West, but replications are required

This brings me to a new paper by Arnocky et al (2016) that examined how altruism affects mating success in humans (as this is still psychology research, “humans” translates roughly as “undergraduate psychology majors”, but such is the nature of convenience samples). The researchers first sought (a) to document that more altruistic people really were preferred as mating partners (spoilers: they are), and then (b) to try and explain why we might expect them to be. Let’s begin with what they found, as that much is fairly straightforward. In their first study, Arnocky et al (2016) recruited 192 women and 105 men from a Canadian university and asked them to complete a few self-report measures: an altruism scale (used to measure general dispositions towards providing aid to others when reciprocation is unlikely), a mating success scale (measuring perceptions of how desirable one tends to be towards the opposite sex), their numbers of lifetime sexual partners, as well as the number of those that were short-term, the number of times over the last month they had sex with their current partner (if they had one, which about 40% did), and a measure of their personality more generally.

These measures were then entered into a regression (controlling for personality). When it came to predicting perceived mating success, reported altruism was a significant predictor (β = 0.25), but neither sex nor the altruism-sex interaction was. This suggests that both men and women tend to become more attractive to the opposite sex if they behave more altruistically (or, conversely, that people who are more selfish are less desirable, which sounds quite plausible). However, what it means for one to be successful in the mating domain varies by sex: for men, having more sexual partners usually implies a greater level of success, whereas the same does not hold true for women as often (as gametes are easy for women to obtain, but investment is difficult). In accordance with this point, it was also found that altruism predicted the number of lifetime sexual partners overall (β = .16), but this effect was specific to men: more altruistic men had more sexual partners (and more casual ones), whereas more altruistic women did not. Finally, within the context of existing relationships, altruism also (sort of) predicted the number of times someone had sex with their partner in the last month (β = .27); while there was not a significant interaction with sex, a visual inspection of the provided graphs suggests that, if this effect existed, it was predominantly carried by altruistic women having more sex within a relationship, not the men.
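
For readers who want to picture what "entered into a regression (controlling for personality)" looks like, here is a hedged sketch of that kind of analysis. This is not the authors' actual code, file, or variable names; the column names, the hypothetical CSV, and the Big Five controls are assumptions of mine, and the comment about the coefficient simply echoes the value reported above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed: one row per participant with illustrative columns
# mating_success, altruism, sex (0/1), and Big Five personality scores
df = pd.read_csv("arnocky_study1.csv")  # hypothetical file name

model = smf.ols(
    "mating_success ~ altruism * sex + extraversion + agreeableness +"
    " conscientiousness + neuroticism + openness",
    data=df,
).fit()
print(model.summary())  # the reported standardized coefficient for altruism was about .25
```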

Now that’s all well and good, but the authors wanted to go a little further. In their second study, rather than just asking participants about how altruistic they were, they offered participants the opportunity to be altruistic: after completing the survey, participants could indicate how much (if any) of their earnings they wanted to donate to a charity of their choice. That way, you get what might be a less-biased measure of one’s actual altruism (rather than their own perception of it). Another 335 women and 189 men were recruited for this second phase and, broadly, the results follow the same general pattern, but there were some notable differences. In terms of mating success, actual altruistic donations (categorized as either making a donation or not, rather than the amount donated) were not a good predictor (β = -.07). In terms of number of lifetime dating and sexual partners, however, the donation-by-sex interaction was significant, indicating that more charitable men – but not women – had a greater number of relationships and sexual partners (perhaps suggesting that charitable men tend to have more, but shorter, relationships, which isn’t necessarily a good thing for the women involved). Donations also failed to predict the amount of sex participants had been having in their relationship in the last month.

Guess the blood drive just isn’t a huge turn on after all

With these results in mind, there are two main points I wanted to draw attention to. The first of these concerns the measures of altruism in general: effectively, charitable behaviors directed towards strangers. While such behavior might be a more "pure" form of altruism as compared with, say, helping a friend move or giving money to your child, it does pose some complications for the present topic. Specifically, when looking for a desirable mate, people might not want someone who is just generally altruistic. After all, it doesn't always do me much good if my committed partner is spending time and investing resources in other people. I would probably prefer that resources be preferentially directed at me and those I care about, rather than strangers, and I might especially dislike it if altruism directed towards strangers came at my expense (as the same resources can't be invested in me and someone else most of the time). While it is possible that such investments in strangers could return to me later in the form of them reciprocating such aid to my partner, it seems unlikely that the deficit would be entirely and consistently made up, let alone surpassed.

To make the point concrete, if someone was equally altruistic towards all people, there would be little point in forming any kind of special relationship with that kind person (friendships or otherwise), because you'd get the same benefits from them regardless of how much you invested in them (even if that amount was nothing).

This brings me to the second point I wanted to discuss: the matter of why people like the company of altruists. There are two explanations that come to mind. The first explanation is simple: people like access to resources, and altruists tend to provide them. This explanation should hardly require much in the way of testing given its truth is plainly obvious. The second explanation is more complex, and it’s one the authors favor: altruism honestly signals some positive, yet difficult-to-observe quality about the altruist. For instance, if I were to donate blood, or my time to clean up a park, this would tell you something about my underlying genetic qualities, as an individual in worse condition couldn’t shoulder the costs of altruism effectively. In this sense, altruism functions in a comparable manner to a peacock’s tail feathers; it’s a biologically-honest signal because it’s costly.

While it does have some plausibility, this signaling explanation runs into some complications. First, as the authors note, women donated more than men did (70% vs. 57%), despite donations predicting sexual behavior better for men. If women were donating to signal some positive quality in the mating domain, it's not at all clear it was working. Further, patterns of charitable donations in the US show a U-shaped distribution, whereby those with access to the most and the fewest financial resources tend to donate more than those in the middle. This seems like a pattern the signaling explanation should not predict if altruism is meaningfully and consistently tied to important, but difficult-to-observe, biological characteristics. Finally, while the argument could be made that altruism directed towards friends, sexual partners, and kin is not necessarily indicative of someone's willingness to donate to strangers (i.e., how altruistic they are dispositionally might not predict how nepotistic they are), well, that's kind of a problem for the altruism-as-signaling model. If donations towards strangers are fairly unpredictive of altruism towards closer relations, then they don't really tell you what you want to know. Specifically, if you want to know how good a friend or dating partner someone would be for you, a better cue is how much altruism they direct towards their friends and romantic partners, not how much they direct to strangers.

“My boyfriend is so altruistic, buying drinks for other women like that”

Last, we can consider the matter of why people behave altruistically with respect to the mating domain. (Very) broadly speaking, there are two primary challenges people need to overcome: attracting a mate and retaining them. Matters get tricky here, as altruism can be used for both of these tasks. As such, a man who is generally altruistic towards a lot of people might be using altruism as a means of attracting the attention of prospective mates without necessarily intending to keep them around. Indeed, the previous point about how altruistic men report having more relationships and sexual partners could be interpreted in just such a light. There are other explanations, of course, such as the prospect that generally selfish people simply don't have many relationships at all, but these need to be separated out. In either case, in terms of how much altruism we provide to others, I suspect that the amount provided to strangers and charitable organizations makes up only a small fraction; we give much more to friends, family, and lovers regularly. If that's the case, measuring someone's willingness to donate in those fairly uncommon contexts might not capture their desirability as a partner as well as we would like.

References: Arnocky, S., Piche, T., Albert, G., Ouellette, D., & Barclay, P. (2016). Altruism predicts mating success in humans. British Journal of Psychology, DOI:10.1111/bjop.12208

 

Morality, Alliances, And Altruism

Having one's research ideas scooped is part of academic life. Today, for instance, I'd like to talk about some research quite similar in spirit to work I intended to do as part of my dissertation (but did not, as it didn't end up making the cut in the final approved package). Even if my name isn't on it, it is still pleasing to see the results I had anticipated. The idea itself arose about four years ago, when I was discussing the curious case of Tucker Max's donation to Planned Parenthood being (eventually) rejected by the organization. To quickly recap, Tucker was attempting to donate half-a-million dollars to the organization, essentially receiving little more than a plaque in return. However, the donation was rejected, it would seem, out of fear of building an association between the organization and Tucker, as some people perceived Tucker to be a less-than-desirable social asset. This, of course, is rather strange behavior, and we would recognize it as such if it were observed in any other species (e.g., "this cheetah refused a free meal for her and her cubs because the wrong cheetah was offering it"); refusing free benefits is just peculiar.

“Too rich for my blood…”

As it turns out, this pattern of behavior is not unique to the Tucker Max case (or the Kim Kardashian one…); it has recently been empirically demonstrated by Tasimi & Wynn (2016), who examined how children respond to altruistic offers from others, contingent on the moral character of said others. In their first experiment, 160 children between the ages of 5 and 8 were recruited to make an easy decision; they were shown two pictures of people and told that the people in the pictures wanted to give them stickers, and they had to pick which one they wanted to receive the stickers from. In the baseline conditions, one person was offering 1 sticker, while the other was offering either 2, 4, 8, or 16 stickers. As such, it should come as no surprise that the person offering more stickers was almost universally preferred (71 of the 80 children wanted the person offering more, regardless of how many more).

Now that we’ve established that more is better, we can consider what happened in the second condition where the children received character information about their benefactors. One of the individuals was said to always be mean, having hit someone the other day while playing; the other was said to always be nice, having hugged someone the other day instead. The mean person was always offering more stickers than the nice one. In this condition, the children tended to shun the larger quantity of stickers in most cases: when the sticker ratio was 2:1, less than 25% of children accepted the larger offer from the mean person; the 4:1 and 8:1 ratios were accepted about 40% of the time, and the 16:1 ratio 65% of the time. While more is better in general, it is apparently not better enough for children to overlook the character information at times. People appear willing to forgo receiving altruism when it’s coming from the wrong type of person. Fascinating stuff, especially when one considers that such refusals end up leaving the wrongdoers with more resources than they would otherwise have (if you think someone is mean, wouldn’t you be better off taking those resources from them, rather than letting them keep them?).
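
The trade-off those children faced can be summarized compactly. The figures below are the approximate acceptance rates reported above (the text gives "less than 25%" and "about 40%", so treat them as rounded); the code is just a restatement of mine showing that willingness to take the larger offer from the mean person rises with the size of that offer.

```python
# Approximate percentage of children taking the larger offer from the "mean" person,
# keyed by the ratio of (mean person's offer) : (nice person's offer)
acceptance_by_ratio = {2: 25, 4: 40, 8: 40, 16: 65}
baseline_acceptance = 71 / 80 * 100  # ~89% took the larger offer when no character info was given

for ratio, pct in acceptance_by_ratio.items():
    print(f"{ratio}:1 offer from the mean person accepted ~{pct}% of the time")
print(f"Baseline (no character information): ~{baseline_acceptance:.0f}%")
```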

This finding was replicated with 64 very young children (approximately one year old). In this experiment, the children observed a puppet show in which two puppets offered them crackers, with one offering a single cracker and the other offering either 2 or 8. Again, unsurprisingly, the majority of children accepted the larger offer, regardless of how much larger it was (24 of 32 children). In the character-information condition, one puppet was shown to be a helper, assisting another puppet in retrieving a toy from a chest, whereas the other puppet was a hinderer, preventing another from retrieving a toy. The hindering puppet, as before, now offered the greater number of crackers, whereas the helper only offered one cracker. When the hindering puppet was offering 8 crackers, his offer was accepted about 70% of the time, which did not differ from the baseline group. However, when the hindering puppet was only offering 2, the acceptance rate was a mere 19%. Even young children, it would seem, are willing to forgo altruism from wrongdoers, assuming the difference in offers isn't too large.

“He’s not such a bad guy once you get $10 from him”

While neat, these results beg for a deeper explanation as to why we should expect such altruism to be rejected. I believe hints of this explanation are provided by the way Tasimi & Wynn (2016) write about their results:

Taken together, these findings indicate that when the stakes are modest, children show a strong tendency to go against their baseline desire to optimize gain to avoid ‘‘doing business” with a wrongdoer; however, when the stakes are high, children show more willingness to ‘‘deal with the devil…”

What I find strange about that passage is that children in the current experiments were not "doing business" or "making deals" with the altruists; there was no quid pro quo going on. The children were no more doing business with the others than they are doing business with a breastfeeding mother. Nevertheless, there appears to be an implicit assumption being made here: an individual who accepts altruism from another is expected to pay that altruism back in the future. In other words, merely receiving altruism from another generates the perception of a social association between the donor and recipient.

This creates an uncomfortable situation for the recipient in cases where the donor has enemies. Those enemies are often interested in inflicting costs on the donor or, at the very least, withholding benefits from him. In the latter case, this makes that social association with the donor less beneficial than it otherwise might be, since the donor will have fewer expected future resources to invest in others if others don't help him; in the former case, not only does the previous logic hold, but the enemies of your donor might begin to inflict costs on you as well, so as to dissuade you from helping him. To put this into a quick example: Jon – your friend – goes out and hurts Bob, say, by sleeping with Bob's wife. Bob and his friends, in response, both withhold altruism from Jon (as punishment) and might even be inclined to attack him for his transgression. If they perceive you as helping Jon – either by providing him with benefits or by preventing them from hurting Jon – they might be inclined to withhold benefits from you or punish you as well, as a means of indirect punishment, until you stop helping Jon. To turn the classic phrase, the friend of my enemy is also my enemy (just as the enemy of my enemy is my friend).

What cues might they use to determine whether you're Jon's ally? Well, one likely useful cue is whether Jon directs altruism towards you. If you are accepting his altruism, this is probably a good indication that you will be inclined to reciprocate it later (else you risk being labeled a social cheater or free rider). If you wish to avoid condemnation and punishment by proxy, then, one route to take is to refuse benefits from questionable sources. This risk can be overcome, however, in cases where the morally-questionable donor is providing you a large enough benefit, which, indeed, was precisely the pattern of results observed here. What counts as "large enough" should be expected to vary as a function of a few things, most notably the size and nature of the transgressions, as well as the degree of expected reciprocity. For example, receiving large donations from morally-questionable donors should be expected to be more acceptable to the extent the donation is made anonymously rather than publicly, as anonymity might reduce the perceived social association between donor and recipient.

You might also try only using “morally clean” money

Importantly (as far as I'm concerned), these data fit well within my theory of morality – where morality is hypothesized to function as an association-management mechanism – but not particularly well with other accounts: altruistic accounts of morality should predict that more altruism is still better; dynamic coordination says nothing about accepting altruism, as giving isn't morally condemned; and self-interest/mutualistic accounts would, I think, also suggest that taking more money would still be preferable, since you're not trying to dissuade others from giving. While I can't help but feel some disappointment that I didn't carry this research out myself, I am both happy with the results that came of it and satisfied with the methods utilized by the authors. Getting research ideas scooped isn't so bad when they turn out well anyway; I'm just happy enough to see my main theory supported.

References: Tasimi, A. & Wynn, K. (2016). Costly rejection of wrongdoers by infants and children. Cognition, 151, 76-79.

Benefiting Others: Motives Or Ends?

The world is full of needy people; they need places to live, food to eat, medical care to combat biological threats, and, if you ask certain populations in the first world, a college education. Plenty of ink has been spilled over the matter of how to best meet the needs of others, typically with a focus on uniquely needy populations, such as the homeless, poverty-stricken, sick, and those otherwise severely disadvantaged. In order to make meaningful progress in such discussions, there arises the matter of precisely why – in the functional sense of the word – people are interested in helping others, as I believe the answer(s) to that question will be greatly informative when it comes to determining the most effective strategies for doing so. What is very interesting about these discussions is that the focus is frequently placed on helping others altruistically: delivering benefits to others in ways that are costly for the person doing the helping. The typical example of this involves charitable donations, where I would give up some of my money so that someone else can benefit. What is interesting about this focus is that our altruistic systems often seem to face quite a bit of pushback from other parts of our psychology when it comes to helping others, resulting in fairly poor delivery of benefits. It represents a focus on the means by which we help others, rather than really serving to improve the ends of effective helping.

For instance, this sign isn't asking for donations

As a matter of fact, the most common ways of improving the lives of others don't involve any altruism at all. For an alternative focus, we might consider the classic Adam Smith quote pertaining to butchers and bakers:

But man has almost constant occasion for the help of his brethren, and it is in vain for him to expect it from their benevolence only. He will be more likely to prevail if he can interest their self-love in his favour, and show them that it is for their own advantage to do for him what he requires of them. Whoever offers to another a bargain of any kind, proposes to do this. Give me that which I want, and you shall have this which you want, is the meaning of every such offer; and it is in this manner that we obtain from one another the far greater part of those good offices which we stand in need of. It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.

In short, Smith appears to recommend that, if we wish to effectively meet the needs of others (or have them meet our needs), we must properly incentivize that other-benefiting behavior instead of just hoping people will be willing to continuously suffer costs. Smith’s system, then, is more mutualistic or reciprocal in nature. There are a lot of benefits to trying to use these mutualistic and reciprocally-altruistic cognitive mechanisms, rather than altruistic ones, some of which I outlined last week. Specifically, altruistic systems typically direct benefits preferentially towards kin and social allies, and such a provincial focus is unlikely to deliver benefits to the needy individuals in the wider world particularly well (e.g., people who aren’t kin or allies). If, however, you get people to behave in a way that benefits themselves and just so happen to benefit others as a result, you’ll often end up with some pretty good benefit delivery. This is because you don’t need to coerce people into helping themselves.  

So let's say we're faced with a very real-world problem: there is a general shortage of organs available for people in need of transplants. What cognitive systems do we want to engage to solve that problem? We could, as some might suggest, make people more empathetic to the plight of those suffering in hospitals, dying from organ failure; we might also try to convince people that signing up as an organ donor is the morally-virtuous thing to do. Both of these plans might increase the number of people willing to posthumously donate their organs, but perhaps there are easier and more effective ways to get people to become organ donors even if they have no particular interest in helping others. I wanted to review two such candidate methods today, neither of which requires that people's altruistic cognitive systems be particularly engaged.

The first method comes to us from Johnson & Goldstein (2003), who examine some cross-national data on rates of organ donor status. Specifically, they note an oddity in the data: very large and stable differences exist between nations in organ donor status, even after controlling for a number of potentially-relevant variables. Might these different rates exist because people's preferences for being an organ donor vary markedly between countries? It seems unlikely, unless people in Germany hold an exceptionally negative view of being an organ donor (14% are donors, from the figures cited) while people in Sweden are particularly keen on it (86%). In fact, in the US, support for organ donation is at near-ceiling levels, yet a large gap persists between those who support it (95%) and those who indicated on a driver's license that they were donors (51% in 2005; 60% in 2015) or who had signed a donor card (30%). If it's not people's lack of support for such a policy, what explains the difference?

A poor national sense for graphic design?

Johnson & Goldstein (2003) float a simple explanation for most of the national differences: whether donor programs were opt-in or opt-out. What that refers to is the matter of, assuming someone has made no explicit decision as to what happens to their organs after they die, what decision would be treated as the default? In opt-in countries (like Germany and the US), non-donor status would be assumed unless someone signs up to be a donor; in opt-out countries, like Sweden, people are assumed to be donors unless they indicate that they do not wish to be one. As the authors report, the opt-in countries have much lower effective consent rates (on average, 60% lower) and the two groups represent non-overlapping populations. That data supplements the other experimental findings from Johnson & Goldstein (2003) as well. The authors had 161 participants take part in an experiment where they were asked to imagine they had moved to a new state. This state either treated organ donation as the default option or non-donation as the default, and participants were asked whether they would like to confirm or change their status. There was also a third condition where no default answer was provided. When no default answer was given, 79% of participants said they would be willing to be an organ donor; a percentage which did not differ from those who confirmed their donor status when it was the default (82%). However, when non-donor status was the default, only 42% of the participants changed their status to donor. 
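
As a compact restatement of that default effect in the hypothetical-move experiment (the percentages are the ones reported above; the condition labels and structure are mine), the gap only appears when non-donation is treated as the default:

```python
# Percentage of participants ending up as donors in Johnson & Goldstein's (2003)
# imagined-move experiment, by which answer was treated as the default (figures from the text)
consent_rate = {
    "no default (forced choice)": 79,
    "donor status is the default": 82,
    "non-donor status is the default": 42,
}
for condition, pct in consent_rate.items():
    print(f"{condition}: {pct}% ended up as donors")
```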

So defaults seem to matter quite a bit, but let's assume that a nation isn't going to change its policy from opt-in to opt-out anytime soon. What else might we do if we wanted to improve the rates of people signing up to be an organ donor in the short term? Eyting et al (2016) tested a rather simple method: paying people €10. The researchers recruited 320 German university students who did not currently have an organ donor card and provided them the opportunity to fill one out. These participants were split into three groups: one in which there was no compensation offered for filling out the card, one in which they would personally receive €10 for filling out a card (regardless of which choice they picked: donor or non-donor), and a final condition in which €10 would be donated to a charitable organization (the Red Cross) if they filled out a card. No differences were observed between the percentage of participants who filled out the card in the control (35%) and charity (36%) conditions. However, in the personal benefit group, there was a spike in the number of people filling out the card (72%). Not all those who filled out the cards opted for donor status, though. Between conditions, the percentages of people who both (a) filled out the card and (b) indicated they wanted to be a donor were about 44% in the personal payment condition, 28% in the control condition, and only 19% in the charity group. Not only did the charity appeal not seem particularly effective, it was even nominally counterproductive.
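
Again as a restatement of the figures above (the condition labels and layout are mine), both the card-completion rate and the donor-registration rate move only when the €10 goes to the participant personally:

```python
# Eyting et al. (2016): percentage filling out a donor card, and percentage both filling
# out a card and choosing donor status, by incentive condition (figures from the text)
results = {
    "no payment":         {"filled_card": 35, "card_and_donor": 28},
    "€10 to charity":     {"filled_card": 36, "card_and_donor": 19},
    "€10 to participant": {"filled_card": 72, "card_and_donor": 44},
}
for condition, r in results.items():
    print(f"{condition}: {r['filled_card']}% filled out a card, "
          f"{r['card_and_donor']}% registered as donors")
```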

“I already donated $10 to charity and now they want my organs too?!”

Now, admittedly, helping others because there’s something in it for you isn’t quite as sexy (figuratively speaking) as helping because you’re driven by an overwhelming sense of empathy, conscience, or simply helping for no benefit at all. This is because there’s a lower signal value in that kind of self-beneficial helping; it doesn’t predict future behavior in the absence of those benefits. As such, it’s unlikely to be particularly effective at building meaningful social connections between helpers and others. However, if the current data is any indication, such helping is also likely to be consistently effective. If one’s goal is to increase the benefits being delivered to others (rather than building social connections), that will often involve providing valued incentives for the people doing the helping.

On one final note, it’s worth mentioning that these papers only deal with people becoming a donor after death; not the prospect of donating organs while alive. If one wanted to, say, incentivize someone to donate a kidney while alive, a good way to do so might be to offer them money; that is, allow people to buy and sell organs they are already capable of donating. If people were allowed to engage in mutually-beneficial interactions when it came to selling organs, it is likely we would see certain organ shortages decrease as well. Unfortunately for those in need of organs and/or money, our moral systems often oppose this course of action (Tetlock, 2000), likely contingent on perceptions about which groups would be benefiting the most. I think this serves as yet another demonstration that our moral sense might not be well-suited for maximizing the welfare of people in the wider social world, much like our empathetic systems don’t.

References: Eyting, M., Hosemann, A., & Johannesson, M. (2016). Can monetary incentives increase organ donations? Economics Letters, 142, 56-58.

Johnson, E. & Goldstein, D. (2003). Do defaults save lives? Science, 302, 1338-1339.

Tetlock, P. (2000). Coping with trade-offs: Psychological constraints and political implications. In A. Lupia, M. McCubbins, & S. Popkin (Eds.), Elements of Reason: Cognition, Choice, and the Bounds of Rationality (pp. 239-322).

Morality, Empathy, And The Value Of Theory

Let's solve a problem together: I have some raw ingredients that I would like to transform into my dinner. I've already managed to prepare and combine the ingredients, so all I have left to do is cook them. How am I to solve this problem of cooking my food? Well, I need a good source of heat. Right now, my best plan is to get in my car and drive around for a bit, as I have noticed that, after I have been driving for some time, the engine in my car gets quite hot. I figure I can use the heat generated by driving to cook my food. It would come as no surprise to anyone if you had a couple of objections to my suggestion, mostly focused on the point that cars were never designed to solve the problems posed by cooking. Sure, they do generate heat, but that's really more of a byproduct of their intended function. Further, the heat they do produce isn't particularly well-controlled or evenly-distributed. Depending on how I position my ingredients or the temperature they require, I might end up with a partially-burnt, partially-raw dinner that is likely also full of oil, gravel, and other debris that has been kicked up into the engine. Not only is the car engine not very efficient at cooking, then, it's also not very sanitary. You'd probably recommend that I try using a stove or oven instead.

“I’m not convinced. Get me another pound of bacon; I’m going to try again”

Admittedly, this example is egregious in its silliness, but it does make its point well: while I noted that my car produces heat, I misunderstood the function of the device more generally and tried to use it to solve a problem inappropriately as a result. The same logic also holds in cases where you’re dealing with evolved cognitive mechanisms. I examined such an issue recently, noting that punishment doesn’t seem to do a good job as a mechanism for inspiring trust, at least not relative to its alternatives. Today I wanted to take another run at the underlying issue of matching proximate problem to adaptive function, this time examining a different context: directing aid to the great number of people around the world who need altruism to stave off death and non-lethal, but still quite severe, suffering (issues like alleviating malnutrition and infectious diseases). If you want to inspire people to increase the amount of altruism directed towards these needy populations, you will need to appeal to some component parts of our psychology, so what parts should those be?

The first step in solving this problem is to think about what cognitive systems might increase the amount of altruism directed towards others, and then examine the adaptive function of each to determine whether they will solve the problem particularly efficiently. Paul Bloom attempted a similar analysis (about three years ago, but I’m just reading it now), arguing that empathetic cognitive systems seem like a poor fit for the global altruism problem. Specifically, Bloom makes the case that empathy seems more suited to dealing with single-target instances of altruism, rather than large-scale projects. Empathy, he writes, requires an identifiable victim, as people are giving (at least proximately) because they identify with the particular target and feel their pain. This becomes a problem, however, when you are talking about a population of 100 or 1000 people, since we simply can’t identify with that many targets at the same time. Our empathetic systems weren’t designed to work that way and, as such, augmenting their outputs somehow is unlikely to lead to a productive solution to the resource problems plaguing certain populations. Rather than cause us to give more effectively to those in need, these systems might instead lead us to over-invest further in a single target. Though Bloom isn’t explicit on this point, I feel he would likely agree that this has something to do with empathetic systems not having evolved because they solved the problems of others per se, but rather because they did things like help the empathetic person build relationships with specific targets, or signal their qualities as an associate to those observing the altruistic behavior.

Nothing about that analysis strikes me as distinctly wrong. However, provided I have understood his meaning properly, Bloom goes on to suggest that the matter of helping others involves the engagement of our moral systems instead (as he explains in this video, he believes empathy “fundamentally…makes the world worse,” in the moral sense of the term, and he also writes that there’s more to morality – in this case, helping others – than empathy). The real problem with this idea is that our moral systems are not altruistic systems, even if they do contain altruistic components (in much the same way that my car is not a cooking mechanism even if it does generate heat). This can be summed up in a number of ways, but simplest is in a study by Kurzban, DeScioli, & Fein (2012) in which participants were presented with the footbridge dilemma (“Would you push one person in front of a train – killing them – to save five people from getting killed by it in turn?”). If one was interested in being an effective altruist in the sense of delivering the greatest number of benefits to others, pushing is definitely the way to go under the simple logic that five lives saved is better than one life spared (assuming all lives have equal value). Our moral systems typically oppose this conclusion, however, suggesting that saving the lives of the five is impermissible if it means we need to kill the one. What is noteworthy about the Kurzban et al (2012) paper is that you can increase people’s willingness to push the one if the people in the dilemma (both being pushed and saved) are kin.

Family always has your back in that way…

The reason for this increase in pushing when dealing with kin, rather than strangers, seems to have something to do with our altruistic systems that evolved for delivering benefits to close genetic relatives; what we call kin-selected mechanisms (mammary glands being a prime example). This pattern of results from the footbridge dilemma suggests there is a distinction between our altruistic systems (that benefit others) and our moral ones; they function to do different things and, as it seems, our moral systems are not much better suited to dealing with the global altruism problem than empathetic ones. Indeed, one of the main features of our moral systems is nonconsequentialism: the idea that the moral value of an act depends on more than just the net consequences to others. If one is seeking to be an effective altruist, then, using the moral system to guide behavior seems to be a poor way to solve that problem because our moral system frequently focuses on behavior per se at the expense of its consequences. 

That's not the only reason to be wary of the power of morality to solve effective altruism problems either. As I have argued elsewhere, our moral systems function to manage associations with others, most typically by strategically manipulating our side-taking behavior in conflicts (Marczyk, 2015). Provided this description of morality's adaptive function is close to accurate, the metaphorical goal of the moral system is to generate and maintain partial social relationships. These partial relationships, by their very nature, oppose the goals of effective altruism, which are decidedly impartial in scope. The reasoning of effective altruism might, for instance, suggest that it would be better for parents to spend their money not on their child's college tuition, but rather on relieving dehydration in a population across the world. Such a conclusion would conflict not only with the outputs of our kin-selected altruistic systems, but can also conflict with other aspects of our moral systems. As some of my own forthcoming research finds, people do not appear to perceive much of a moral obligation for strangers to direct altruism towards other strangers, but they do perceive something of an obligation for friends and family to help each other (specifically when threatened by outside harm). Our moral obligations towards existing associates make us worse effective altruists (and, in Bloom's sense of the word, morally worse people in turn).

While Bloom does mention that no one wants to live in that kind of strictly utilitarian world – one in which the welfare of strangers is treated equally to the welfare of friends and kin – he does seem to be advocating we attempt something close to it when he writes:

Our best hope for the future is not to get people to think of all humanity as family—that’s impossible. It lies, instead, in an appreciation of the fact that, even if we don’t empathize with distant strangers, their lives have the same value as the lives of those we love.

Appreciation of the fact that the lives of others have value is decidedly not the same thing as behaving as if they have the same value as the ones we love. Like most everyone else in the world, I want my friends and family to value my welfare above the welfare of others; substantially so, in fact. There are obvious adaptive benefits to such relationships, such as knowing that I will be taken care of in times of need. By contrast, if others showed no particular care for my welfare, but rather just sought to relieve as much suffering as they could wherever it existed in the world, there would be no benefit to my retaining them as associates; they would provide me with assistance or they wouldn't, regardless of the energy I spent (or didn't) maintaining a social relationship with them. Asking the moral system to be a general-purpose altruism device is unlikely to be much more successful than asking my car to be an efficient oven, asking people to treat others the world over as if they were kin, or asking you to empathize with 1,000 people. It represents an incomplete view as to the functions of our moral psychology. While morality might be impartial with respect to behavior, it is unlikely to be impartial with regard to the social value of others (which is why, also in my forthcoming research, I find that stealing to defend against an outside agent of harm is rated as more morally acceptable than doing so to buy recreational drugs).

“You have just as much value to me as anyone else; even people who aren’t alive yet”

To top this discussion off, it is also worth mentioning those pesky, unintended consequences that sometimes accompany even the best of intentions. By relieving deaths from dehydration, malaria, and starvation today, you might be ensuring greater harm in future generations in the form of increasing the rate of climate change, species extinction, and habitat destruction brought about by sustaining larger global human populations. Assuming for the moment that this is true, would that mean that feeding starving people and keeping them alive today is morally wrong? Both options – withholding altruism when it could be provided and ensuring harm for future generations – might get the moral stamp of disapproval, depending on the reference group (from the perspective of future generations dealing with global warming, it's bad to feed; from the perspective of the starving people, it's bad not to feed). This is why a slight majority of participants in Kurzban et al (2012) reported that pushing and not pushing can both be morally unacceptable courses of action. If we rely on our moral sense to guide our behavior in this instance, then, we are unlikely to be very successful in our altruistic endeavors.

References: Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptation for moral judgment. Evolution & Human Behavior, 33, 323-333.

Marczyk, J. (2015). Moral alliance strategies theory. Evolutionary Psychological Science, 1, 77-90.

Clues To The Function Of Moralistic Punishment

One of the major questions I've spent the better part of the last few years trying to work up an answer to is the matter of why – in the adaptive sense – people punish others as third parties moralistically. That is to ask why people condemn and punish others for engaging in behavior perceived to be "wrong". This is not the same question as why people take revenge on others for their behavior; that kind of punishment isn't terribly mysterious. Chimps, for instance, will punish those who cheated them, but they seem fairly uninterested in punishing those who cheated others (Riedl, Jensen, Call, & Tomasello, 2012). Humans are a bit different in that regard: not only do we involve ourselves in the disputes of others, but we do so on behalf of people who aren't kin or even social allies. This is rather strange behavior, even though we do not usually perceive it to be so. I'm not the only one who has attempted an evolutionary explanation for moralistic punishment, of course; in my hubris, however, I do feel that the explanation I ended up floating tends to fit the existing data better than other, contemporary theories. Today, I wanted to try and demonstrate that fit by considering some relatively recent empirical research.

“I don’t know, man; I think it might be a little baggy. Try a size down”

The paper in question (Kurzban, DeScioli, & O'Brien, 2007) concerns audience effects on punishment. Specifically, it addresses the following question: do people punish others more (as third parties) when they are being watched and their decisions are being made public, relative to when they're anonymous? The question was examined across two experiments. In the first of these, 43 participants viewed the outcome of a previous trust game in which one player had the choice between two different outcomes: $39 for himself and $1 for the other player, or $20 for both. In this case, the player had selected the former, selfish option. The participants in question were provided with $7 and told they could spend any amount of that money – from $0 to $7 – to deduct from the selfish player. For each dollar they spent on deductions, the selfish player would lose $3. Importantly, the participants were either doing so in conditions under which their spending would be unknown to anyone but themselves (n = 24), or conditions where the experimenter would know how much each particular participant spent (n = 19). In the anonymous condition, 38% of participants spent at least some money on punishment, whereas in the public condition, 47% did. Importantly, however, the amount punished varied between these conditions: those in the public condition tended to spend more on punishment than those in the anonymous condition when they did punish. For instance, though precise numbers are not reported, approximately 20% of the participants in the public condition paid the full $7 to punish the selfish player, as compared with 0% in the private condition.
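
The punishment "technology" in that first experiment is easy to state as arithmetic: every dollar the third party spends removes three dollars from the selfish player. A minimal sketch of mine, using only the amounts described above:

```python
def third_party_punishment(spend, endowment=7, deduction_ratio=3):
    """Return (punisher's remaining money, amount deducted from the selfish player).
    Each dollar spent by the punisher removes `deduction_ratio` dollars from the target."""
    assert 0 <= spend <= endowment
    return endowment - spend, spend * deduction_ratio

print(third_party_punishment(0))  # (7, 0): keep everything, punish nothing
print(third_party_punishment(7))  # (0, 21): spend it all, as roughly 20% of public-condition punishers did
```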

The second experiment examined people's responses to a different kind of economic game – a prisoner's dilemma – where one player attempted to cooperate with their partner, but was met with defection. In terms of the outcome, this means that rather than both players getting $25 (if both had cooperated), the cooperative partner got $0 while the defector got $30. The participants in this experiment were shown this outcome and given $10. As before, the participants could spend any amount of this $10 to deduct from the defecting player at the 3-to-1 rate. In this case, the punishment decisions would either be known by (a) no one else but the participant (n = 31), (b) the experimenter alone (n = 26), or (c) the entire group of participants (n = 30). Again, fewer participants opted to punish at all when punishment was anonymous (42%) relative to when punishment was known to either the experimenter (65%) or the whole group (67%); the amount spent on punishment increased as well (Ms = $1.06, $2.54, and $3.17, respectively). Further, those in the public conditions also tended to display more anger, disgust, and contempt at the cheating, suggesting that they were more upset by the transgression when other people were watching (or they were at least pretending to be).

The existence of audiences seemed to have an important impact on moralistic punishment: not only did the presence of other people affect the percentage of third parties willing to punish at all, but it also positively influenced how much they punished. In a sentence, we could say that the presence of observers was being used as an input by the cognitive systems determining moralistic sentiments. While this may sound like a result that could have been derived without needing to run the experiments, the simplicity and predictability of these findings by no means make them trivial on a theoretical level when it comes to answering the question, "what is the adaptive value of punishment?" Any theory seeking to explain morality in general – and moral punishment in particular – needs to be able to present a plausible explanation for why cues to anonymity (or lack thereof) are being used as inputs by our moral systems. What benefits arise from public punishment that fail to materialize in anonymous cases?

“If you’re good at something, never do it for free…or anonymously”

The first theoretical explanation for morality that these results cut against is the idea that our moral systems evolved to deliver benefits to others per se. One of the common forms of this argument is that our moral systems evolved because they delivered benefits to the wider group (in the form of maintaining beneficial cooperation between members) even if doing so was costly in terms of individual fitness. This argument clearly doesn't work for explaining the present data, as the potential benefits that could be delivered to others by deterring cheating or selfishness do not (seem to) change contingent on anonymity, yet moral punishment does.

These results also cut against some aspects of mutualistic theories of morality. This class of theory suggests that, broadly speaking, our moral sense responds primarily to behavior perceived to be costly to the punisher’s personal interests. In short, third parties do not punish perpetrators because they have any interest in the welfare of the victim, but rather because punishers can enforce their own interests through that punishment, however indirectly. To place that idea into a quick example, I might want to see a thief punished not because I care about the people he harmed, but rather because I don’t want to be stolen from, and punishing the thief for their behavior reduces that probability for me. Since my interests in deterring certain behaviors do not change contingent on my anonymity, the mutualistic account might feel some degree of threat from the present data. As a rebuttal to that point, the mutualistic theories could argue that my punishment being made public would deter others from stealing from me to a greater extent than if they did not know I was the one responsible for the punishing. “Because I punished theft in a case where it didn’t affect me,” the rebuttal goes, “this is a good indication I would punish theft that did affect me. Conversely, if I fail to punish transgressions against others, I might not punish them when I’m the victim.” While that argument seems plausible at face value, it’s not bulletproof either. Just because I might fail to go out of my way to punish someone else who was, say, unfaithful in their relationship, that does not necessarily mean I would tolerate infidelity in my own. This rebuttal would require an appreciable correspondence between my willingness to punish those who transgress against others and those who transgress against me. As much of the data I’ve seen suggests a weak-to-absent link on that front in both humans and non-humans, the argument might not hold much empirical water.

By contrast, the present evidence is perfectly consistent with the association-management explanation posited in my theory of morality. In brief, this theory suggests that our moral sense helps us navigate the social world, identifying good and bad targets of our limited social investment, and uses punishment to build and break relationships with them. Morality, essentially, is an ingratiation mechanism; it helps us make friends (or, alternatively, not alienate others). Under this perspective, the role of anonymity makes quite a bit of sense: if no one will know how much you punished, or whether you did at all, your ability to use punishment to manage your social associations is effectively compromised. Accordingly, third-party punishment drops off in a big way. On the other hand, when people will know about their punishment, participants become more willing to invest in it in the face of better estimated social return. This social return need not necessarily reside with the actual person being harmed, either (who, in this case, was not present); it can also come from other observers of punishment. The important part is that your value as an associate can be publicly demonstrated to others.

The first step isn’t to generate value; it’s to demonstrate it

The lines between these accounts can seem a bit fuzzy at times: good associates are often ones who share your values, providing some overlap between mutualistic and association accounts. Similarly, punishment, at least from the perspective of the punisher, is altruistic: they are suffering a cost to provide someone else with a benefit. This provides some overlap between the association and altruistic accounts as well. The important point for differentiating these accounts, then, is to look beyond their overlap into domains where they make different predictions about outcomes, or predict the same outcome will obtain but for different reasons. I feel the results of the present research not only help do that (being inconsistent with group-selection accounts), but also present opportunities for future research directions as well (such as examining whether third-party punishment appreciably predicts revenge).

References: Kurzban, R., DeScioli, P., & O’Brien, E. (2007). Audience effects on moralistic punishment. Evolution & Human Behavior, 28, 75-84.

Riedl, K., Jensen, K., Call, J., & Tomasello, M. (2012). No third-party punishment in chimpanzees. Proceedings of the National Academy of Sciences, 109, 14824-14829.

The Altruism Of The Rich And The Poor

Altruistic behavior is a fascinating topic. On the one hand, it’s something of an evolutionary puzzle as to why an organism would provide benefits to others at an expense to itself. A healthy portion of this giving has already been explained via kin selection (providing resources to those who share an appreciable portion of your genes) and reciprocal altruism (giving to you today increases the odds of you giving to me in the future). As these phenomena have, in a manner of speaking, been studied to death, they’re a bit less interesting; all the academic glory goes to people who tackle new and exciting ideas. One such new and exciting realm of inquiry (new as far as I’m aware, anyway) concerns the social regulations and sanctions surrounding altruism. A particularly interesting case I came across some time ago concerned people actually condemning Kim Kardashian for giving to charity; specifically, for not giving enough. Another case involved the turning away of a sizable charitable donation from Tucker Max so as to avoid a social association with him.

*Unless I disagree with your personality; in that case, I’ll just starve

Just as it’s curious that people are altruistic towards others at all, it is perhaps more curious that people would ever turn down altruism or condemn others for giving it. To examine one more example that crossed my screen today, I wanted to consider two related articles. The first of the articles concerns charitable giving in the US. The point I wanted to highlight from that piece is that, as a percentage of their income, the richest section of the population tends to give the largest portion to charity. While one could argue that this is obviously the case because the rich have more available money which they don’t need to survive, that idea would fail to explain the point that charitable giving appears to evidence a U-shaped distribution, in which the richest and poorest sections of the population contribute a greater percentage of their income than those in the middle (though how to categorize the taxes paid by each group is another matter). The second article I wanted to bring up condemned the richer section of the population for giving less than they used to, compared to the poor, who had apparently increased the percentage they give. What’s notable about that analysis of the issue is that the former fact – that the rich still tended to donate a higher percentage of their income overall – is not mentioned at all. I imagine that such an omission was intentional.

Taken together, all these pieces of information are consistent with the idea that there’s a relatively opaque strategic element which surrounds altruistic behavior. While it’s one people might unconsciously navigate with relative automaticity, it’s worthwhile to take a step back and consider just how strange this behavior is. After all, if we saw this behavior in any other species, we would be very curious indeed as to what led them to do what they did; perhaps we would even forgo the usual moralization that accompanies and clouds these issues while we examined them. So, on the subject of rich people and strategic altruism, I wanted to review a unique data set from Smeets, Bauer, & Gneezy (2015) concerning the behavior of millionaires in two standard economic games: the dictator and ultimatum games. In the former, the participant unilaterally decides how €100 will be divided between themselves and another participant; in the latter, the participant proposes how €100 will be split between themselves and a receiver. If the receiver accepts the offer, both players get paid according to the division; if the receiver rejects it, both players get nothing.
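To make the difference between the two games concrete, here is a minimal sketch of their payoff rules as just described; the €100 pie and the accept/reject rule come from the study description, while the function names and example offers are mine.

```python
# Illustrative sketch of the two games' payoff rules (not the authors' materials).
def dictator_game(offer, pie=100):
    """The dictator keeps the remainder; the receiver has no say."""
    return pie - offer, offer              # (dictator, receiver)

def ultimatum_game(offer, accepted, pie=100):
    """If the receiver rejects the proposed split, both players get nothing."""
    return (pie - offer, offer) if accepted else (0, 0)

# Example offers echoing the averages reported below: €71 in the dictator game
# and €64 in the ultimatum game when paired with a low-income receiver.
print(dictator_game(71))                   # (29, 71)
print(ultimatum_game(64, accepted=True))   # (36, 64)
```

The design difference matters strategically: only in the ultimatum game does a stingy offer risk leaving the proposer with nothing.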

In the dictator game, approximately 200 Dutch millionaires (those with over €1,000,000 in their bank accounts) were told they were either playing the game with another millionaire or with a low-income receiver. According to data from the existing literature on these games, the average amount given to the receiver in a dictator game is a little shy of 30%, with only about 5% of dictators allocating all the money to the recipient. In stark contrast, when paired with a low-income individual, millionaire dictators tended to give an average of 71% of the money to the other player, with 45% of dictators giving the full €100. When paired with another millionaire recipient, however, the millionaire dictators only gave away approximately 50% of the €100 sum which, while still substantially more generous than the literature average, is less generous than their giving towards the poor.

The rich: maybe not as evil and cold as they’re imagined to be

Turning to the data from the ultimatum games, we typically find that people are more generous in their offers to receivers in such circumstances, owing to the real possibility that a rejected offer can leave the proposer without anything. Indeed, the reported percentage of the offers in ultimatum games from the wider literature is close to 45% of the total sum (as compared with 30% in dictator games). In the ultimatum game, the millionaires were actually less generous towards the low-income recipients than in the dictator game – bucking the overall trend – but were still quite generous overall, giving an average of 64% of the total sum, with 30% of proposers giving away the full €100 to the other person (as compared with 71% and 45% from above). Interestingly, when paired with other millionaires in the ultimatum game, millionaire proposers gave precisely the same amounts they tended to give in the dictator games. In that case, the strategic context had no effect on their giving.

In sum, millionaires tended to evidence quite a bit more generosity in giving contexts than previous, lower-income samples had. However, this generosity was largely confined to instances of giving to those in greater need, relative to a more general kind of altruism. In fact, if one were in need and interested in receiving donations from rich targets, it would seem to serve that goal better not to frame the request as some kind of exchange relationship through which the rich person will eventually receive some monetary benefit, as that kind of strategic element appears to result in less giving.

Why should this be the case, though? One possible explanation that comes to mind builds upon the ostensibly obvious explanation for rich people giving more that I mentioned initially: the rich already possess a great number of resources they don’t require. In economic terms, the marginal value of additional money for them is lower than it is for the poor. When the giving is economically strategic, then, the benefit to be received is more money, which, as I just suggested, has a relatively low marginal value to the rich recipient. By contrast, when the giving is driven more by altruism, the benefits to be received are predominantly social in nature: the gratitude of the recipients, possible social status from observers, esteem from peers, and so on. The other side of this giving coin, as I also mentioned at the beginning, is that there can also be social costs associated with not giving enough for the rich. As building social alliances and avoiding condemnation might have different marginal values than additional units of money, the rich could perceive greater benefits from giving in certain contexts, relative to exchange relationships.

Threats – implicit or explicit – do tend to be effective motivators for giving

Such an explanation could also, at least in principle, help explain why the poorest section of the population tends to be relatively charitable, compared to the middle: the poorest individuals face a greater need for social alliances, owing to the relatively volatile nature of their position in life. As economic resources might not be stable, poorer individuals might be better served by using more of them to build stronger social networks when money is available. Such spending would allow the poor to hedge and defend against the possibility of future bad luck; that friend you helped out today might be able to give you a place to sleep next month if you lose your job and can’t make rent. By contrast, those in the middle of the economic world are not facing the same degree of social need as the lower classes while, at the same time, not having as much disposable income as the upper classes (and, accordingly, might also be facing less social pressure to be generous with what they do have), leading them to give less. Considerations of social need guiding altruism also fit nicely with the moral aspect of altruism, which is just one more reason for me to like this explanation.

References: Smeets, P., Bauer, R., & Gneezy, U. (2015). Giving behavior of millionaires. Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.1507949112

The Morality Of Guilt

Today, I wanted to discuss the topic of guilt; specifically, what the emotion is, whether we should consider it to be a moral emotion, and whether it generates moral behavioral outputs. The first part of that discussion will be somewhat easier to handle than the latter. In the most common sense, guilt appears to be an emotion aroused by the perception, on the part of the individual experiencing it, of wrongdoing that has harmed someone else. The negative feelings that accompany guilt often lead the guilty party to desire to make amends to the injured one so as to compensate for the damage done and repair the relationship between the two (e.g., “I’m sorry I totaled your car by driving it into your house; I feel like a total heel. Let me buy you dinner to make up for it”). Because the emotion appears to be aroused by the perception of a moral transgression – that is, someone feels they have done something wrong, or impermissible – it seems like guilt could rightly be considered a moral emotion; specifically, an emotion related to moral conscience (a self-regulating mechanism), rather than moral condemnation (an other-regulating mechanism).

Nothing beats packing for a nice, relaxing guilt trip

The understanding that guilt is a moral emotion, then, allows us to inform our opinion about what kind of thing morality is by examining how guilt works in greater, proximate detail. In other words, we can infer what adaptive value our moral sense might have had through studying the form of the emotional guilt mechanisms: what inputs they use and what outputs they produce. This brings us to some rather interesting work I recently dug out of my backlog of papers to read, by de Hooge et al (2011), that focused on figuring out what kinds of effects guilt tends to have on people’s behavior when you take guilt out of a dyadic (two-person) relationship and drop it into larger groups of people. The authors were interested, in part, in deciding whether or not guilt could be classified as a morally good emotion. While they acknowledge that guilt is a moral emotion, they question whether it produces morally good outcomes in certain types of situations.

This leads naturally to the following question: what is a morally good outcome? The answer to that question is going to depend on what type of function one thinks morality has. In this case, de Hooge et al (2011) write as if our moral sense is an altruism device – one that functions to deliver benefits to others at a cost to one’s self. Accordingly, a morally good outcome is going to be one that results in benefits flowing to others at a cost to the actor. Framed in terms of guilt, we might expect that individuals experiencing guilt will behave more altruistically than individuals who are not; the guilty’s regard for the welfare of others will be regulated upwards, with a corresponding down-regulation placed on their own welfare. The authors note that much of the previous research on guilt has uncovered evidence consistent with that pattern: guilty parties tend to forgo benefits to themselves or suffer costs in order to deliver benefits to the party they have wronged. This makes guilt look rather altruistic.

Such research, however, was typically conducted in a two-party context: the guilty party and their victim. This presents something of an interpretative issue, inasmuch as the guilty party only has that one option available to them: if, say, I want to make you better off, I need to suffer a cost myself. While that might make the behavior look altruistic in nature, in the social world we reside within, that is usually not the only option available; I could, for instance, also make you better off not at an expense to myself, but rather at the expense of someone else – an outcome most people wouldn’t exactly call altruism, and one de Hooge et al (2011) wouldn’t consider morally good either. To the extent a guilty party is simply interested in making their victim better off, either option would serve; to the extent the guilty party is interested in behaving altruistically towards the victimized party, though, things should look different in a three-party context.

As they usually do…

de Hooge et al (2011) report on the results of three pilot studies and four experiments examining how guilt affects behavior in these three-party contexts in terms of welfare-relevant choices. While I don’t have time to discuss all of what they did, I wanted to highlight one of their experiments in more detail while noting that each of them generated data consistent with the same general pattern. The experiment I will discuss is their third one. In that experiment, 44 participants were assigned to either a guilt or a control condition. In both conditions, the participants were asked to complete a two-part joint effort task with another person to earn payment rewards. Colored letters (red or green) would pop up on each player’s screens and the participant and their partner had to click a button quickly in order to complete the task: the participant would push the button if the letter was green, whereas their partner would have to push if the letter was red. In the first part of the task, the performance of both the participant and their partner would be earning rewards for the participant; in the second part, the pair would be earning rewards for the partner instead. Each reward was worth 8 units of what I’ll call welfare points.

The participants were informed that while they would receive the bonus from the first round, their partner would not receive a bonus from the second. In the control condition, the partner did not earn the bonus because of their own poor performance; in the guilt condition, the partner did not earn the bonus because of the participant’s poor performance. In the next phase of the experiment, the participants were presented with three payoffs: their own, their partner’s, and that of an unrelated individual from the experiment who had also earned the bonus. The participants were told that one of the three would be randomly assigned the chance to redistribute the earnings, though, of course, the participants always received that assignment. This allowed participants to give a benefit to their partner, but to do so at either a cost to themselves or at a cost to someone else.
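To make the structure of that choice concrete, here is a minimal sketch of the redistribution options; the 8-unit bonuses come from the description above, while the function name and framing are my own, not the authors’ materials.

```python
# Illustrative sketch of the redistribution choice (not the authors' code).
def redistribute(give_own, take_from_third, own=8, partner=0, third=8):
    """Return the final (participant, partner, third-party) welfare units after the
    participant compensates the partner from their own bonus and/or the third party's."""
    assert 0 <= give_own <= own and 0 <= take_from_third <= third
    return own - give_own, partner + give_own + take_from_third, third - take_from_third

# Plugging in the average choices reported below for the guilt condition
# (2.2 units from the self, 4.2 units from the uninvolved third party):
print(redistribute(2.2, 4.2))   # approximately (5.8, 6.4, 3.8)
```

The sketch highlights the key feature of the design: the partner can be made better off either at the participant’s expense or at the uninvolved third party’s expense, and the two routes are separable.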

Out of the 8 welfare units the participants had earned, they opted to give an average of 2.2 of them to their partner in the guilt condition, but only 1 unit in the control condition, so guilt did seem to make the participants somewhat more altruistic. Interestingly, however, guilt made participants even more willing to take from the outside party: guilty parties took an average of 4.2 units from the third party for their partner, relative to the 2.5 units they took in the control condition. In short, the participants appeared to be interested in repairing the relationship between themselves and their partners, but were more interested in doing so via taking from someone else, rather than giving up their own resources. Participants also viewed the welfare of the third party as being relatively unimportant as compared to the welfare of the partner they had ostensibly failed.

“To make up for hurting Mike, I think it’s only fair that Karen here suffers”

This returns us to the matter of what kind of thing morality is. de Hooge et al (2011) appear to view morality as an altruism device and view guilt as a moral emotion, yet, strangely, guilt did not appear to make people substantially more altruistic; instead, it seemed to make them partial. Given that guilt was not making people behave more altruistically, we might want to reconsider the adaptive function of morality. What if, rather than acting as an altruism device, morality functions as an association-management mechanism? If our moral sense functions to build and manage partial relationships, benefiting someone you’ve harmed at the expense of other targets of investment might make more sense. This is because there are good reasons to suspect that friendships represent partial alliances maintained in the service of being able to win potential future disputes (DeScioli & Kurzban, 2009). These partial alliances are rank-ordered, however: I have a best friend, close friends, and more distant ones. In order to signal that I rank you highly as a friend, then, I need to demonstrate that I value you more than other people. Showing that I value you highly relative to myself – as would be the case with acts of altruism – would not necessarily tell you much about your value as my friend relative to other friends. By contrast, behaving in ways that signal I value you more than others, at least temporarily – as appeared to be the case in the current experiments – could serve to repair a damaged alliance. Morality as an altruism device doesn’t fit the current pattern of data; an alliance-management device does.

References: DeScioli, P. & Kurzban, R. (2009). The alliance hypothesis for human friendship. PLoS ONE 4(6): e5802. doi:10.1371/journal.pone.0005802

de Hooge, I., Nelissen, R., Breugelmans, S., & Zeelenberg, M. (2011). What is moral about guilt? Acting “prosocially” at the disadvantage of others. Journal of Personality & Social Psychology, 100, 462-473.

 

Socially-Strategic Welfare

Continuing with the trend from my last post, I wanted to talk a bit more about altruism today. Having already discussed that people do, in fact, appear to engage in altruistic behaviors and possess some cognitive mechanisms that have been selected for that end, I want to move into discussing the matter of variance in altruistic inclinations. That is to say that people – both within and between populations – are differentially inclined towards altruistic behavior, with some people appearing rather uninterested in altruism, while others appear quite interested in it. The question of interest for many is how those differences are to be explained. One explanatory route would be to suggest that the people in question have, in some sense, fundamentally different psychologies. A possible hypothesis to accompany that explanation might go roughly as follows: if people have spent their entire lives being exposed to social messages about how helping others is their duty, their cognitive mechanisms related to altruism might have developed differently than those of someone who instead spent their life being exposed to the opposite message (or, at least, less of the previous one). On that note, let’s consider the topic of welfare.

In a more academic fashion, if you don’t mind…

The official website of Denmark suggests that such a message – that helping is a duty – might well be sent in that country, stating that:

The basic principle of the Danish welfare system, often referred to as the Scandinavian welfare model, is that all citizens have equal rights to social security. Within the Danish welfare system, a number of services are available to citizens, free of charge.

Provided that this statement accurately characterizes what we would consider the typical Danish stance on welfare, one might imagine that growing up in such a country could lead individuals to develop substantially different views about welfare than, say, someone who grew up in the US, where opinions are quite varied. In my non-scientific and anecdotal experience, while some in the US might consider the country a welfare state, those same people frequently seem to be the ones who think that is a bad thing; those who think it’s a good thing often seem to believe the US is not nearly enough of a welfare state. At the very least, the US doesn’t advertise a unified belief about welfare on its official site.

On the other hand, we might consider another hypothesis: that Danes and Americans don’t necessarily possess any fundamentally different cognitive mechanisms designed for regulating altruistic behavior. Instead, members of both countries might possess very similar underlying cognitive mechanisms which are being fed different inputs, resulting in the different national beliefs about welfare. This is the hypothesis that was tested by Aaroe & Petersen (2014). The pair make the argument that part of our underlying altruistic psychology is a mechanism that functions to determine deservingness. This hypothetical mechanism is said to use cues of laziness as inputs: in the presence of a perceived needy but lazy target, altruistic inclinations towards that individual should be reduced; in the presence of a needy, hard-working, but unlucky individual, these inclinations should be augmented. Thus, cross-national differences, as well as within-group differences, concerning support for welfare programs should be explained, at least in part, by perceptions of deservingness (I will get to the why part of this explanation later).

Putting those ideas together, two countries that differ in their willingness to provide welfare should also differ in their perceptions of the recipients in general. However, there are exceptions to every rule: even if you believe (correctly or incorrectly) that group X happens to be lazy and undeserving of welfare, you might believe that a particular member of group X bucks that trend and does deserve assistance. This is the same thing as saying that while men are generally taller than women, you can find exceptions where a particular woman is quite tall or a man quite short. This leads to a corollary prediction, which Aaroe & Petersen examine: despite decades of exposure to different social messages about welfare, participants from the US and Denmark should come to agree on whether or not a particular individual deserves welfare assistance.

Never have I encountered a more deserving cause

The authors sampled approximately 1,000 participants from both the US and Denmark; each sample was designed to be representative of its home country’s demographics. Those samples were then surveyed on their views about people who receive social welfare via a free-association task in which they were asked to write descriptors of those recipients. Words that referred to the recipients’ laziness or poor luck were coded to determine which belief was the more dominant one (as defined by the number of lazy words minus unlucky ones). As predicted, the lazy stereotype was dominant in the US relative to Denmark, with Americans listing an average of 0.3 more words referring to laziness than to luck; a difference approximately four times the size of the one found in Denmark, where these two beliefs were more balanced.
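A toy illustration of that dominance score may help make the coding concrete; the lazy-minus-unlucky logic comes from the description above, while the word lists and responses are made up for the example, not taken from Aaroe & Petersen.

```python
# Illustrative sketch of the lazy-minus-unlucky dominance score (hypothetical word lists).
LAZY = {"lazy", "unmotivated", "freeloader"}
UNLUCKY = {"unlucky", "struggling", "disadvantaged"}

def dominance_score(descriptors):
    """Positive values mean the lazy stereotype dominates for this respondent."""
    lazy = sum(word in LAZY for word in descriptors)
    unlucky = sum(word in UNLUCKY for word in descriptors)
    return lazy - unlucky

responses = [["lazy", "struggling"], ["freeloader"], ["unlucky"]]
scores = [dominance_score(r) for r in responses]
print(sum(scores) / len(scores))   # prints this toy sample's mean; the reported US average was about 0.3
```

The measure is deliberately simple: it captures only which attribution, laziness or bad luck, comes to mind more readily when people free-associate about welfare recipients.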

In line with that previous finding was the fact that Americans were also more likely to support the tightening of welfare restrictions (M = 0.57) than the Danes (M = 0.49, on a 0-1 scale). However, this difference between the two samples only existed under the condition of informational uncertainty (i.e., when participants were thinking about welfare recipients in general). When presented with a welfare recipient who was described as the victim of a work-related accident and motivated to return to work, the US and Danish citizens both agreed that welfare restrictions for people like that person should not be tightened (M = 0.36 and 0.35, respectively); when this recipient was instead described as able-bodied but unmotivated to work, the Americans and Danes once again agreed, suggesting that welfare restrictions should be tightened for people like him (M = 0.76 and 0.79). In the presence of more individualizing information, then, the national stereotypes built over a lifetime of socialization appear to get crowded out, as predicted. All it took was about two sentences’ worth of information to get the US and Danish citizens to agree. This pattern of data supports the hypothesis that some universal psychological mechanisms reside in both populations, and that their differing views tend to be the result of those mechanisms being fed different information.

This brings us to the matter of why people are using cues to laziness to determine who should receive assistance, which is not explicitly addressed in the body of the paper itself. If the psychological mechanisms in question functioned to reduce the need of others per se, laziness cues should not be relevant. Returning to the example from my last post, for instance, mothers do not tend to withhold breastfeeding from infants on the basis of whether those infants are lazy; breastfeeding seems designed to reduce need per se in the infants. It’s more likely that the mechanisms responsible for determining these welfare attitudes are instead designed to build lasting friendships (Tooby & Cosmides, 1996): by assisting an individual today, you increase the odds they will be inclined to assist you in the future. This altruism might be especially relevant when the individual is in more severe need, as the marginal value of altruism in such situations is larger, relative to when they’re less needy (in the same way that a very hungry individual values the same amount of food more than a slightly hungry one; the same food is simply a better return on the same investment when given to the hungrier party). However, lazy individuals are unlikely to be able to provide such reciprocal assistance – even if they wanted to – as the factors determining their need are chronic, rather than temporary. Thus, while both the lazy and the motivated individual are needy, the lazy individual is the worse social investment; the unlucky one is much better.

Investing in toilet futures might not have been the wisest retirement move

In this case, then, perceptions of deservingness appear to be connected to adaptations that function to build alliances. Might perceptions of deservingness in other domains serve a similar function? I think it’s probable. One such domain is the realm of moral punishment, where transgressors are seen as being deserving of punishment. In this case, if victimized individuals make better targets of social investment than non-victimized ones (all else being equal), then we should expect people to direct altruism towards the former group; when it comes to moral condemnation, that altruism takes the form of assisting the victimized individual in punishing the transgressor. Despite that relatively minor difference, the logic here is precisely the same as my explanation for welfare attitudes. The moral explanation would require that moral punishment serve an alliance-building function. When most people think about morality, they don’t tend to think about building friendships, largely owing to the impartial components of moral cognition (since impartiality opposes partial friendships). I think that problem is easy enough to overcome; in fact, I deal with it in an upcoming paper (Marczyk, in press). Then again, it’s not as if welfare is an amoral topic, so there’s overlap to consider as well.

References: Aaroe, L. & Petersen, M. (2014). Crowding out culture: Scandinavians and Americans agree on social welfare in the face of deservingness cues. The Journal of Politics, 76, 684-697.

Marczyk, J. (in press). Moral alliance strategies theory. Evolutionary Psychological Science.

Tooby, J. & Cosmides, L. (1996). Friendship and the banker’s paradox: Other pathways to the evolution of adaptations for altruism. Proceedings of the British Academy, 88, 119-143.

Phrasing The Question: Does Altruism Even Exist?

There’s great value in being precise when it comes to communication (if you want your message to be understood as you intended it, anyway; when clarity isn’t the goal, by all means, be imprecise). While that may seem trivial enough, it is my general experience that many communicative conflicts in psychology arise because people are often unaware of, or at least less than explicit about, the level of analysis at which they’re speaking. As an example of these different levels of analysis, today I will consider a question that many people wonder about: does altruism really exist? While definitions do vary, perhaps the most common definition of altruism involves the benefiting of another individual at the expense of the actor. So, to rephrase the question a little, “Do people really benefit others at an expense to themselves, or are ostensibly altruistic acts merely self-interest in disguise?”

“I would have saved his life if you wouldn’t have thought me selfish for doing so”

There are three cases I’m going to consider to help demonstrate these different levels of analysis. The first two examples are human-centric, as they have a greater bearing on the initial question: reciprocal exchanges and breastfeeding. In the former case – reciprocal altruism – two individuals will provide benefits to each other in the hopes of receiving similar assistance in turn. This type of behavior is often summarized with the “you scratch my back and I’ll scratch yours” line. In the case of breastfeeding, the bodies of mammalian mothers will produce a calorically-valuable milk, which they allow their dependent offspring to feed on. This latter type of altruism is generally not reciprocal in nature, as most mothers do not breastfeed their infants in the hope of, say, their infant one day breastfeeding them.

But are these acts really altruistic? After all, if I’m doing a favor for you in the hopes that you’ll do one for me later, it seems that I’m not enduring a cost to provide you with a benefit; I’m enduring a cost to try to provide myself with a benefit. As for breastfeeding, offspring share some of their mother’s genes, so allowing an infant to breastfeed is, in the genetic sense, beneficial for the mother; at least for a time, until weaning conflicts kick in. If the previous thoughts ran through your head, chances are that you’re thinking about some of the right things but in the wrong way, blurring the lines between different levels of analysis. Allow me to explain.

In a post last year, I discussed what are commonly known as the “big four” questions one might ask about a biological mechanism, like the psychological ones that generate altruistic behavior: how does it proximately (immediately) function; how does it develop over an organism’s life; what is its evolutionary history with respect to other species; and what is its evolved function? These questions all require different considerations and evidence to answer and, in many cases, can be informative as to answers at other levels. Despite their mutually informative nature, they are nonetheless distinct.

The first and fourth questions (proximate and evolutionary functioning) are the most relevant for the current matter. Let’s start by considering reciprocal altruism. The first question we might ask concerns the proximate functioning of the behavior: do people behave in ways that deliver benefits to other individuals that carry costs for the actors? Well, we certainly seem to. Some of these acts of reciprocal altruism might be based on relatively short-lived and explicit exchanges: I give you my money, you give me your goods and/or services. In other cases, the exchanges might be longer lived and more implicit: I give you help today, you should be inclined to give me help down the road when I need it. To demonstrate that these acts are, in fact, altruistic, is relatively straightforward: in the first example, for instance, I would be better off getting my goods/services and keeping my money. The act of giving does not provide me with a direct benefit. Even though we might both benefit from the exchange (gains from trade, and all that), it doesn’t mean each portion of the exchange isn’t altruistic. Proximately, then, we can say that people are altruistic or, more conservatively, that people engage in altruistic behaviors from time to time.

Score one for team good

But how about the mechanisms generating these reciprocally-altruistic behaviors; do they function to deliver benefits to others? That is to ask whether these mechanisms were selected to deliver benefits to others. The answer to this question depends on which part of the system you’re looking at. In the broader sense, that answer is “no,” inasmuch as the cognitive systems for reciprocal exchanges appear to owe their existence to receiving benefits from others, rather than providing them; the providing is just instrumental to another goal. This would mean we have altruistic behavior that is the product of a non-altruistic system, which is a perfectly possible outcome. However, in a narrower sense, the answer to that question can be “yes,” inasmuch as the broader cognitive system engaged in reciprocal exchanges is made up of a number of subsystems: one such system needs to monitor the needs of others and generate behavior to deliver benefits to them in order for the system to work, making that bit seem to be adapted for providing altruism; another piece needs to monitor the return on those investments, down-regulating altruistic behavior in the absence of reciprocity (or other relevant cues). This is where being precise really begins to count: some parts of a system might be considered altruistic, while others are more selfish.

Now let’s turn to the breastfeeding example. Beginning again with the proximate question, breastfeeding certainly seems like an altruistic behavior: the mother’s body pays a metabolic cost to create the calorically-rich milk which is then consumed by the infant. So the mother is paying a biological cost to deliver a benefit to another individual, making the behavior altruistic. In the functional sense of the word, this behavior appears to be the result of adaptations for altruism: mothers of a number of mammalian species are found to breastfeed their infants with little apparent need for reciprocity. The reason they can do so, as I previously mentioned, is that the infants share some portion of their mother’s genes, so the mother is, by proxy, improving her reproductive success by helping her offspring survive and thrive. Importantly, one needs to bear in mind that explaining the persistence of these altruistic mechanisms over time with kin selection does not make them any less altruistic. In much the same way, while the manufacturing of cars owes its existence to the process being profitable, that doesn’t mean that I’m inclined to think of cars as really being devices designed to make money.

The third example of altruism I wanted to mention is an interesting one, involving a certain parasite – the Lancet Liver Fluke – that infects ants (among other things). In brief, this parasite alters an ant’s behavior such that the ant will dangle from the tip of a blade of grass, making it more likely to be eaten by a passing grazing animal (the fluke then travels from the grazer to snails to ants and back into the grazers; it’s a rather involved reproductive cycle). In the proximate sense, this behavior of the ant is altruistic inasmuch as the ant is suffering a cost – death – to deliver a benefit to the parasite. However, the ant possesses no cognitive mechanisms designed for this function; the adaptations for making the ant behave as it does are found within the parasite. In this case, while the proximate behavior of the ant might appear to be altruistic, it is not because of any altruistic adaptation on the part of the ant.

The new poster child for increasing altruism

Depending, then, on what one means by “really” when asking if something is “really” altruistic, one can get vastly different answers to the question. Some behavior may or may not be proximately altruistic, the system under consideration may contain both altruistic and non-altruistic mechanisms, and the extent of that altruism can also vary. These examples should highlight the considerable subtlety that underlies such analyses, hopefully impressing upon you the point that one can easily stumble rather than progress if ideas are not carefully selected and understood. There are, of course, other realms we could consider – like altruism that functions to signal traits about the actor, to gain social status, or whether the immediate motives of an actor are altruistic – but the general analyses, rather than their specific details, are what is important here. Thinking about what benefits organisms might reap through their altruistic behavior is a very valuable line of thought; it just shouldn’t be confused with other meaningful levels of thought.