The Value Of Association Value

Some time ago I was invited to give a radio interview regarding a post I had written: The Politics of Fear. Having never been exposed to this kind of format before, I found myself having to make some adjustments to my planned presentation on the fly, as it quickly became apparent that the interviewer was looking for quick and overly-simplified answers rather than anything with real depth (and who can blame him? It’s not like many people tune into the radio expecting to receive anything resembling a college education). At one point I was posed a question along the lines of, “How can people avoid letting their political biases get the better of them?” – a matter I was not adequately prepared to answer. In the interests of compromise, and of giving the poor host at least something he could work with (rather than the real answer: “I have no idea; give me a day or two and I’ll see what I can find”), I came up with a plausible-sounding guess: try to avoid social isolation of your viewpoints. In other words, don’t remove people from your friend groups or social media just because you disagree with what they say, and actively seek out opposing views. I also suggested that people try to expand their legitimate interest in the welfare of other groups, in order to take those groups’ views more seriously. Without real and constant challenges to your views, you can end up stuck in a political and social echo chamber, and that will often hinder your ability to see the world as it actually is.

“Can you believe those nuts who think flooding poses real risks?”

As luck would have it, a new paper (Almaatouq et al., 2016) fell into my lap recently that – at least to some indirect extent – helps speak to the quality of the answer I provided at the time (spoiler: as expected, my answer was pointing in the right direction, but was incomplete and overly-simplified). The first part of the paper examines the shape of friendships themselves: specifically, whether they tend to be reciprocal or unrequited in one direction or the other. The second part leverages those factors to try and explain what kinds of friendships are useful for generating behavioral change (in this case, getting people to be more physically active). Put simply, if you want to change someone’s behavior (or, presumably, their opinions), does it matter whether (a) you think they’re your friend but they disagree, (b) they think you’re their friend but you disagree, (c) you both agree, and (d) how close you are as friends?

The first set of data reports on some general friendship demographics. Surveys were given to 84 students in a single undergraduate course, asking each to indicate, on a scale from 0-5, whether they considered each of the other students a stranger (0), a friend (3), or one of their best friends (5). The students were also asked to predict how each other student in the class would rate them. In other words, you would be asked, “How close do you rate your relationship with X?” and “How close does X rate their relationship to you?” A friendship was considered mutual if both parties rated each other at a 3 or greater. There was indeed a positive correlation between the two ratings (r = .36), as we should expect: if I rate you highly as a friend, there should be a good chance you also rate me highly. However, that reality diverged significantly from what the students predicted. If a student had nominated someone as a friend, their prediction of how that person would rate them showed substantially more correspondence (r = .95). Expressed in percentages: if I nominated someone as a friend, I would expect them to nominate me back about 95% of the time. In reality, they would only do so about 53% of the time.
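
To make the bookkeeping behind a reciprocity figure like that 53% concrete, here is a minimal sketch. The 0-5 scale, the class size of 84, and the mutual-friendship threshold of 3 come from the study; the ratings themselves are random filler standing in for the real survey data:

```python
import random

random.seed(0)
n = 84  # class size from the study

# Hypothetical ratings: ratings[i][j] is how student i rates student j
# on the 0-5 scale (i == j is ignored). These are random stand-ins for
# the actual survey responses.
ratings = [[random.randint(0, 5) for _ in range(n)] for _ in range(n)]

# A directed "friend" nomination is a rating of 3 or greater; a
# friendship counts as mutual when both directed nominations exist.
num_nominations = 0
num_reciprocated = 0
for i in range(n):
    for j in range(n):
        if i != j and ratings[i][j] >= 3:
            num_nominations += 1
            if ratings[j][i] >= 3:
                num_reciprocated += 1

# Of all directed nominations i -> j, what fraction did j return?
reciprocity = num_reciprocated / num_nominations
print(f"reciprocity rate: {reciprocity:.2f}")
```

On uniform random filler the rate hovers around 50%, simply because any given rating clears the threshold about half the time; the empirically interesting part is not the 53% itself but how far it sat below the 95% that nominators expected.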

The matter of why this inaccuracy exists is curious. Almaatouq et al. (2016) put forward two explanations, one of which is terrible and one of which is quite plausible. The former explanation (which isn’t really examined in any detail, and so might just have been tossed in) is that people are inaccurate at predicting these friendships because non-reciprocal friendships “challenge one’s self-image.” This is a bad explanation because (a) the idea of a “self” isn’t consistent with what we know about how the brain works, (b) maintaining a positive attitude about oneself does nothing adaptive per se, and (c) it would need to posit a mind that is troubled by unflattering information and so chooses to ignore it, rather than the simpler solution of a mind that is simply not troubled by such information in the first place. The second, plausible explanation is that some of these friendship ratings actually reflect some degree of aspiration, rather than just current reality: because people want friendships with particular others, they behave in ways likely to help them obtain such friendships (such as by nominating their relationship as mutual). If these ratings partially reflect one’s intent to develop a friendship over time, that could explain some of the inaccuracy.

Though not discussed in the paper, it is also possible that perceivers aren’t entirely accurate because people intentionally conceal friendship information from others. Imagine, for instance, what consequences might arise for someone who finally works up the nerve to go tell their co-workers how they really feel about them. By disguising the strength of our friendships publicly, we can leverage social advantages from that information asymmetry. Better to have people think you like them than know you don’t in many cases.

“Of course I wasn’t thinking of murdering you to finally get some quiet”

With this understanding of how and why relationships can be reciprocal or asymmetrical, we can turn to the matter of how they might influence our behavior and, in turn, how satisfactory my answer was. The authors utilized a data set from the Friends and Family study, which had asked a group of 108 people to rate each other as friends on a 0-7 scale, and had also collected information about their physical activity levels (passively, via their smartphones). In this study, participants could earn money by becoming more physically active. In the control condition, participants could only see their own information; in the two social conditions (which were combined for analysis), they could see both their own activity levels and those of two peers. In one of those social conditions, participants earned a reward based only on their own behavior; in the other, the reward was based on the behavior of their peers (intended as a peer-pressure condition). The relationship variables and conditions were entered into a regression to predict each participant’s change in physical activity.

In general, having information about the activity levels of peers tended to increase the activity of the participants, but the nature of those relationships mattered. Information about the behavior of peers in reciprocal friendships had the largest effect on change (b = 0.44). In other words, information about people you liked who also liked you appeared to be most relevant. The other type of relationship that significantly predicted change was one in which someone else valued you as a friend, even if you didn’t value them as much (b = 0.31). By contrast, if you valued someone else who did not share that feeling, information about their activity didn’t predict behavioral change well (b = 0.15); moreover, the strength of friendships seemed to be rather beside the point (b = -0.04), which was interesting. Whether people were friends seemed to matter more than the depth of that friendship.
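
For readers curious what such a regression looks like mechanically, here is a hedged sketch. The sample size, the 0-7 strength scale, and the reported coefficients (b = 0.44, 0.31, 0.15, -0.04) come from the paper, but the data below are simulated, the relationship dummies are simplified toys, and the actual study’s model included other variables, so this illustrates the setup rather than reproducing the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 108  # sample size of the Friends and Family study

# Toy predictors. In the real data these relationship types would be
# derived from the 0-7 mutual ratings; here they are random dummies.
reciprocal = rng.integers(0, 2, n)       # both parties nominate each other
they_value_you = rng.integers(0, 2, n)   # incoming-only nomination
you_value_them = rng.integers(0, 2, n)   # outgoing-only nomination
strength = rng.uniform(0, 7, n)          # friendship-strength rating

# Simulate an activity-change outcome using the paper's reported
# coefficients plus noise, then recover them by ordinary least squares.
change = (0.44 * reciprocal + 0.31 * they_value_you
          + 0.15 * you_value_them - 0.04 * strength
          + rng.normal(0.0, 0.1, n))

X = np.column_stack([np.ones(n), reciprocal, they_value_you,
                     you_value_them, strength])
b, *_ = np.linalg.lstsq(X, change, rcond=None)
print(np.round(b[1:], 2))  # estimates land near 0.44, 0.31, 0.15, -0.04
```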

So what do these results tell us about my initial answer regarding how to avoid perceptual biases in the social world? This requires a bit of speculation, but I was heading in the right direction: if you want to effect some kind of behavioral change (in this case, reducing one’s biases rather than increasing physical activity), information from or about other people is likely a tool that could be effectively leveraged to that end. Learning that other people hold different views than your own could cause you to think about the matter a little more deeply, or in a new light. However, simply encountering dissenting opinions in your everyday life is often not going to be enough to produce a meaningful change. If you don’t value someone else as an associate, if they don’t value you, or if neither of you values the other, then their opinions are going to be less effective at changing yours than they would be if you both valued each other.

At least if mutual friendship doesn’t work, there’s always violence

The really tricky part of that equation is how one goes about generating those bonds with others who hold divergent opinions. It’s certainly not the easiest thing in the world to form meaningful, mutual friendships with people who disagree (sometimes vehemently) with your outlooks on life. Moreover, achieving an outcome like “reducing cognitive biases” isn’t even always an adaptive thing to do; if it were, it would be remarkable that those biases existed in the first place. When people are biased in their assessment of research evidence, for instance, they’re usually biased because something is on the line, as far as they’re concerned. It does an academic who has built his career on his personal theory no favors to proudly proclaim, “I’ve spent the last 20 years of my life being wrong and achieving nothing of lasting importance, but thanks for the salary and grant funding.” As such, people’s motivation to make meaningful friendships with those who disagree with them is probably a bit on the negative side (unless their hope is that, through this friendship, they can persuade the other person to adopt their views, rather than vice versa, because – surely – the bias lies with other people; not me). Accordingly, I’m not hopeful that my recommendation would play out well in practice, but at least it sounds plausible enough in theory.

References: Almaatouq, A., Radaelli, L., Pentland, A., & Shmueli, E. (2016). Are you your friends’ friends? Poor perception of friendship ties limits the ability to promote behavioral change. PLoS ONE, 11, e0151588. doi:10.1371/journal.pone.0151588

Science By Funeral

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

As the above quote by Max Planck suggests, science is a very human affair. While, in an idealized form, the scientific process is a very useful tool for discovering truth, the reality of using the process in the world can be substantially messier. One of the primary culprits of this messiness is that being a good scientist per se – as defined by one who rigorously and consistently applies the scientific method – is not necessarily any indication that one is particularly bright or worthy of social esteem. It is perfectly possible to apply the scientific method to the testing of any number of inane or incorrect hypotheses. Instead, social status (and its associated rewards) tends to be provided to people who discover something that is novel, interesting, and true. Well, sort of; the discovery itself need not be exactly true as much as people need to perceive the idea as being true. So long as people perceive my ideas to be true, I can reap those social benefits; I can even do so if my big idea was actually quite wrong.

Sure; it looks plenty bright, but it’s mostly just full of hot air

Just as there are benefits to being known as the person with the big idea, there are also benefits to being friends with the person with the big idea, as access to those social (and material) resources tends to diffuse to the academic superstar’s close associates. Importantly, these benefits can still flow to those associates even if they lack the same skill set that made the superstar famous. To put this all into a simple example, getting a professor position at Harvard likely carries social and material benefits to the professor; those who study under the professor and get a degree from Harvard can also benefit by riding the coattails of the professor, even if they aren’t particularly smart or talented themselves. One possible result of this process is that certain ideas can become entrenched in a field, even if the ideas are not necessarily the best: as the originator of the idea has a vested interest in keeping it the order of the day in his field, and his academic progeny have a similar interest in upholding the originator’s status (as their status depends on his), new ideas may be – formally or informally – barred from entry and resisted, even if they more closely resemble the truth. As Planck quipped, then, science begins to move forward as the old guard die out and can no longer defend their status effectively; not because they relinquish their status in the face of new, contradictory evidence.

With this in mind, I wanted to discuss the findings of one of the most interesting papers I’ve seen in some time. The paper (Azoulay, Fons-Rosen, & Zivin, 2015) examined what happens to a field of research in the life sciences following the untimely death of one of its superstar members. Azoulay et al (2015) began by identifying their sample of approximately 13,000 superstars, 452 of whom died prematurely (which, in this case, corresponded to an average age at death of 61). The term “superstar” would certainly describe those who died well, at least in terms of their output: a median of 138 authored papers, 8,347 citations, and over $16 million in government funding received by the time of their death. These superstars were then linked to the various subfields in which they published, their collaborators and non-collaborators within those subfields were identified, and a number of other variables that I won’t go into were also collected.

The question of interest, then, is what happens to these fields following the death of a superstar. In terms of the raw number of publications within a subfield, there was a very slight increase following the death, of about 2%. That number does not give much of a sense of the interesting things that were happening, however. The first of these is that the superstar’s collaborators saw a rather steep decline in their research output: a decline of about 40% over time. However, this drop in the collaborators’ productivity was more than offset by an 8% increase in output by non-collaborators. This effect remained (though somewhat reduced) even when the analysis excluded papers on which the superstar was an author (which makes sense: if one of your co-authors dies, of course you will produce fewer papers; there was just more to the decline than that). This decline in collaborator output would be consistent with a healthy degree of coattail-riding taking place prior to the death. Further, there were no hints of these trends prior to the death, suggesting that the death in question was doing the causing when it came to changes in research output.
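
The arithmetic behind that “more than offset” claim is worth a moment: collaborators typically account for only a small slice of a subfield’s output, so a large percentage drop among them can be swamped by a modest percentage rise among the more numerous non-collaborators. The split below (12 collaborator papers vs. 88 non-collaborator papers per year) is purely hypothetical, chosen only to show how a 40% decline and an 8% increase can net out to roughly the ~2% overall rise reported:

```python
# Hypothetical subfield publishing 100 papers a year, split between
# the superstar's collaborators and everyone else (made-up numbers).
collab_before, noncollab_before = 12.0, 88.0

collab_after = collab_before * (1 - 0.40)   # collaborators: 40% decline
noncollab_after = noncollab_before * 1.08   # non-collaborators: 8% increase

total_before = collab_before + noncollab_before
total_after = collab_after + noncollab_after
pct_change = 100 * (total_after - total_before) / total_before
print(f"net change in subfield output: {pct_change:+.1f}%")  # ~ +2.2%
```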

Figure 2: How much better-off your death made other people

The possible “whys” behind these effects were examined in the rest of the paper, and a number of hints as to what is going on emerged. First, there is the effect of death on citation counts, with non-collaborators producing more high-impact – but not low-impact – papers after the superstar’s passing. Second, these non-collaborators were producing papers in the very same subfields the superstar had previously occupied. Third, this new work did not appear to be building on the work of the superstar; the non-collaborators tended to cite the superstar less and newer work more. Fourth, these newer authors were largely not competitors of the superstar while the superstar was alive, opting instead to become active in the field following the death. The picture being painted by the data seems to be one in which the superstars initially dominate publishing within their subfields. While new faces might have some interest in researching these same topics, they fail to enter the field while the superstar is alive, providing their new ideas – not those already established – only after a hole has opened in the social fabric of the field. In other words, there might be barriers to entry keeping newcomers out, and those barriers relax somewhat following the death of a prominent member.

Accordingly, Azoulay et al (2015) turn their attention to what kinds of barriers might exist. The first barrier they posit is one they call “Goliath’s Shadow,” where newcomers are simply deterred by the prospect of having to challenge existing, high-status figures. Evidence consistent with this prospect was reported: the importance of the superstar – as defined by the fraction of papers in the field produced by them – seemed to have a noticeable effect, with more important figures leaving a larger void to fill. By contrast, the involvement of the superstar – as defined by what percentage of their own papers were published in a given field – did not seem to have an effect. In other words, the more of a field’s papers (and grant money) a superstar accounted for, the less room other people seemed to see for themselves.

Two other possible barriers to entry concern the intellectual and social closure of a field: the former refers to the degree that most of the researchers within a field – not just the superstar – agree on what methods to use and what questions to ask; the latter refers to how tightly the researchers within a field work together, coauthoring papers and such. Evidence for both of these came up positive: fields in which the superstar trained many of the researchers in it and fields in which people worked very closely did not show the major effects of superstar death. Finally, a related possibility is that the associates of the superstar might indirectly control access to the field by denying resources to newcomers who might challenge the older set of ideas. In this instance, the authors reported that the deaths of those superstars who had more collaborators on editorial and funding boards tended to have less of an impact, which could be a sign of trouble. 

The influence of these superstars on generating barriers to entry, then, was often quite indirect. It’s not that the superstars were preventing newcomers themselves; it is unlikely they had the power to do so, even if they were trying. Instead, these barriers were created indirectly, either through the superstar receiving a healthy portion of the existing funding and publication slots, or through the collaborators of the superstar forming a relatively tight-knit community that could more effectively wield influence over which ideas got to see the light of day.

“We have your ideas. We don’t know who you are, and now no one else will either”

While it’s easy (and sometimes fun) to conjure up a picture of some old professor and their intellectual clique keeping out plucky, young, and insightful prospects with the power of discrimination, it is important not to leap to that conclusion immediately. While the faces and ideas within a field might change following the deaths of important figures, that does not necessarily mean the new ideas are closer to that all-important, capital-T Truth that we (sometimes) value. The same social pressures, costs, and benefits that applied to the now-dead old guard apply in turn to the new researchers, and new status within a field will not be reaped by rehashing the ideas of the past, even if they’re correct. Old-but-true ideas might be cast aside for the sake of novelty, just as new-but-false ideas might be promulgated. Regardless of the truth value of these ideas, however, the present data does lend a good deal of credence to the notion that science tends to move one funeral at a time. While truth may eventually win out by a gradual process of erosion, it’s important to always bear in mind that the people doing science are still only human, subject to the same biases and social pressures we all are.

References: Azoulay, P., Fons-Rosen, C., & Zivin, J. (2015). Does science advance one funeral at a time? National Bureau of Economic Research Working Paper No. 21788. doi:10.3386/w21788


A Dust-Up Over College Majors

One of the latest political soundbites I have seen circling my social media is a comment by Jeb Bush regarding psychology majors and the value their degrees afford them in the job market. This is a rather interesting topic for me for a few reasons, chief among which is that I’m a psychology major currently in the midst of application season. As one of the psych-majoring, job-seeking types, I’ve discussed job prospects with many friends and colleagues from time to time. The general impression I’d been given up until this week is that – at least for psychology majors with graduate degrees looking to get into academia – the job market is not as bright as one might hope. The typical job search can involve months or years of sending dozens or hundreds of applications to various schools, with many graduates reduced to taking underpaid positions as adjuncts, making barely enough to pay their bills. By contrast, I’ve had friends in other programs tell me about how their undergraduate degree has people metaphorically lining up to give them a job – jobs that would likely pay a starting salary equal to or greater than what I could demand with a PhD in psychology, even in private industry. Considered along those dimensions, degree envy can easily arise.

Slanderous rumor incoming in 3…2…

Job envy aside, let’s consider the quote from Jeb:

“Universities ought to have skin in the game. When a student shows up, they ought to say ‘Hey, that psych major deal, that philosophy major thing, that’s great, it’s important to have liberal arts … but realize, you’re going to be working a Chick-fil-A…The number one degree program for students in this country … is psychology. I don’t think we should dictate majors. But I just don’t think people are getting jobs as psych majors. We have huge shortages of electricians, welders, plumbers, information technologists, teachers.”

Others have already weighed in on this comment, noting that, well, it’s not exactly true: psychology is only the second most common major, and most of the people with psychology BAs don’t end up working in fast food. The median starting salary for someone with a BA in psychology is about $38,000 a year, which rises to about $62,000 after 10 years if this data set I’ve been looking at is accurate. So that’s hardly fast food wages, even if it might be underwhelming to some.

However, there are some slightly more charitable ways of interpreting Jeb’s comment (provided one feels charitable towards him, which many do not): perhaps one might consider instead how psychology majors do on the job market relative to other majors. After all, just having a college degree tends to help people find good jobs, relative to having no degree at all. So how do psychology majors do in that data set I just mentioned? Of the 319 majors listed and ranked by mid-career income, psychology can be found in three locations (starting and mid-career salaries, respectively):

  • (138) Industrial Psych – 45/74k
  • (231) Psychology – 38/62k
  • (292) Psychology & Sociology – 35/55k

So, despite its popularity as a major, the salary prospects of psychology majors tend to fall below the mid-point of the expected pay scale. Indeed, the median salary of psychology majors is quite similar to that of theater (38/59k), creative writing (38/63k), or art history majors (40/64k). Not to belittle any of those majors either, but it is possible that much of the salary benefit of holding these degrees comes from holding a college degree in general, rather than from those degrees in particular. Nevertheless, these salaries are also fairly comparable to those of electricians, plumbers, and welders, or at least to the estimates Google returns to me; information technologists seem to have better prospects, though (55/84k).
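
As a quick sanity check on the “below the mid-point” claim, the rank numbers quoted above are enough to place each psychology entry within the distribution of the 319 listed majors (the ranks come from the data set discussed in the text; the percentile arithmetic is mine):

```python
# Rank positions of the psychology entries among 319 majors, as listed
# above (rank 1 = highest mid-career salary).
total_majors = 319
ranks = {"Industrial Psych": 138, "Psychology": 231,
         "Psychology & Sociology": 292}

for major, rank in ranks.items():
    pct_above = 100 * (rank - 1) / total_majors
    print(f"{major}: ~{pct_above:.0f}% of listed majors rank higher")
```

Plain “Psychology,” at rank 231, sits with roughly 72% of the listed majors out-earning it at mid-career, which is what “below the mid-point” cashes out to.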

In general, then, the comment by Jeb seems off-base when taken literally: most of the fields he mentions tend to do about as well as psychology majors, and most psychology majors are not making minimum wage. With a charitable interpretation, there is something worth thinking about, though: psychology majors don’t seem to do particularly well when compared with other majors. Indeed, about 25 of the listed majors have median starting salaries at 60k or above; the median of psychology majors after 10 years of working. Now, yes; there is more to a career and an education than what financial payoff one eventually reaps from it (once one pays off the costs of a college degree), but there’s also room to think about how much value the degree you are receiving adds to your life and the lives of others, relative to your options. In fact, I think that concern has a lot to do with why Jeb mentioned the careers he does: it’s not hard to see how the skills a plumber or an electrician learns during their training are applied in their eventual career; the path between a psychology education and a profitable career that provides others with a needed service is not quite as easy to imagine.

It’s down at the bottom; I promise

This brings us to a rather old piece of research regarding psychology majors. Lunneborg & Wilson (1985) sought to answer an interesting question: would psychology majors major in psychology again, if they had the option to do it all over? Their sample was made up of about 800 psychology majors who had graduated between 1978 and 1982. In addition to the “would you major in psychology again” question, participants were also asked to rate, on a scale from 0 to 4, the importance of their college experience in general and of the psychology courses they took in particular, as well as their satisfaction with the skills they felt they gained through their psychology education. Of the 750 participants who responded to the question regarding whether they would major in psychology again, 522 (69%) indicated that they would (conversely, 31% would not). While some other specific numbers are unfortunately not mentioned, the authors report that those who graduated more recently were more likely to indicate a willingness to major in psychology again, relative to those who had been graduates for a longer period of time, perhaps suggesting that some realities of the job market had yet to set in.

More precise numbers are provided, however, to the questions concerning the perceived value of the psychology major. Psychology degrees were rated as most relevant to personal growth (M = 2.38) and liberal arts education (M = 2.23), but less so for graduate preparation in psychology (M = 1.78), graduate preparation outside of psychology (M = 1.61), or career preparation in general (M = 1.49). While people seemed to find psychology interesting, they were more ambivalent about whether they were walking away from their education with career-relevant skills. Perhaps it is not surprising, then, that those who continued on with their education (at the MA level or above) were more satisfied with their education than those who stopped at the BA, as it is at these higher levels that skill sets begin to be explored and applied more fully. Indeed, the authors also report that those who said they were working in a field related to psychology or in their desired field of work were more likely to indicate they would major in psychology again. When things worked out, people seemed inclined to repeat their choices; when their results were less fortunate, people were more inclined to make different choices.

As I mentioned before, making money is not the only reason people seek out certain degrees. This is backed up by the fact that a full 43% of respondents who rated their career preparation from a psychology major as a 0 said they would major in the field again (as compared with 100% of those who rated their career preparation as a 3). People find learning about how other people think interesting – I certainly know I do – and so they are often naturally drawn to such courses. It’s a bit more engaging for most to hear about the decisions people make than it is to learn about, say, abstract calculus. That psychology is interesting to people is a good consolation for its students, considering that psychology majors are among the most likely to be unemployed, relative to other college majors, and the earning premium of their degrees is also among the lowest.

“…But not at Chik-fil-A; I have standards, after all”

Returning to Jeb’s comment one last time, we can see some truth to what he said if we consider the heart of the matter, rather than the specifics: psychology majors do not have outstanding job prospects relative to other college majors – whether in terms of expected income or employment more generally – and psychology majors also report career preparation as being one of the least relevant things about their education, at least at the BA level. It would seem that many psychology majors are getting jobs not necessarily because of their psychology major, but perhaps because they had a major at all. Some degree is better than no degree in the job market; a fact that many non-degree holders are painfully aware of. Despite these career prospects, psychology remains the second most popular major in the US, attesting to people’s interest in the subject. On the other hand, if we consider the specifics, Jeb’s comment is wrong: psychology majors are not working fast food jobs, their income is about on the same level as that of the other careers he lists, and psychology is the second most popular major, not the first. Which parts of all this information sound most relevant likely depends on your position relative to them: most psychology majors do not enjoy having their field of study denigrated, as the reaction to Jeb’s comments showed, while his political opponents likely want this comment to do as much harm to Jeb as possible (and a heavy degree of overlap likely exists between these two groups). Nevertheless, there are some realities of people’s degrees that ought not get lost in the defense against comments like these.

References: Lunneborg, P. & Wilson, V. (1985). Would you major in psychology again? Teaching of Psychology, 1, 17-20.

The Drug Addictions Of Mice And Men

In a previous post, I mentioned the very real possibility that, as people’s personal biases make their way into research, the quality of that research might begin to decline. More generally, I believe that such an issue arises because of what the interpretation of certain results says about the association value of particular groups or individuals. After all, if I believed (correctly) that people like you tend to be more or less [cooperative/aggressive/intelligent/promiscuous/etc.] than others, it would be a fairly rational strategy for me to adjust my behavior around you accordingly, if I had no information about you other than that piece of information. Anyone who has feared being mugged by a group of adolescent males at night but not by a group of children at a playground during the day understands this point intuitively. As a result, some people might – intentionally or not – game their research towards obtaining certain patterns of results that reflect positively or negatively on other groups or, as in today’s case, highlight some research by other people as being particularly important because it encourages us to treat others a certain way. So let’s talk about giving drugs to mice and men.

Way to set a positive example for the kids, Mickey

The article which inspired this post was written by Johann Hari, and its message is that the likely cause of drug addiction (and, perhaps, other addictions as well) is that people fail to bond with other humans, bonding instead with drugs. This is, according to Johann, quite distinct from the explanation that many people favor: that chemical hooks in the drugs alter our brains in such a way as to make us crave them. To make this point, Johann highlights the importance of the Rat Park experiment, in which rats placed in enriched environments failed to develop addictions to opiates (which were placed in one of their water bottles), whereas rats placed in isolated and stressful environments tended to develop addictions to the drugs readily. However, when the isolated rats were moved into the enriched environments, their preference for the drug all but vanished.

The conclusion drawn from this research is that the rats – and, by extension, humans – only really use drugs when their environments are harsh. One quote that really drew my attention was the following passage:

“A heroin addict has bonded with heroin because she couldn’t bond as fully with anything else. So the opposite of addiction is not sobriety. It is human connection.”

I find this interpretation incomplete and stated far too boldly. One rather troublesome thorn for this explanation rears its head only a few passages later, when Johann discusses how the nicotine patch does not help most smokers successfully quit; the quoted figure is that about 18% of smokers quit through the patch, though that percentage is not properly sourced. From the Gallup poll data I dug up, approximately 5% of those who have quit smoking attribute their success to the patch. That seems like a low number, and one that doesn’t quite fit with the chemical-hook hypothesis. Another number sticks out, though: the number of people who attribute their success in quitting to support from friends and family. If Johann’s hypothesis is correct and addicted people are like isolated rats in a cage, we might expect the number who quit successfully through social support to be quite high: if addiction is the opposite of human connection, then as human connections increase, addiction should drop. Unfortunately for his hypothesis, only 6% of ex-smokers attribute their success to those social factors. By contrast, about 50% of the ex-smokers cited just deciding it was time and quitting cold turkey as their preferred method. Now it’s possible that they’re incorrect – that has been known to happen when you ask people to introspect – but I don’t see any reason to assume they are incorrect by default. In fact, many of the habitual smokers I’ve known did not seem like people lacking social connections to begin with; smoking was quite the social activity, and many people started smoking because their friends did. That is, they might have developed their addiction through building social connections, not through lacking them.

Indeed, his hypothesis is all the stranger when considering the failure of people using the patch to successfully kick their habit. If, as Johann suggested, people are bonding with chemicals instead of people, then giving them the chemical in question should correspondingly cut down on their urge to smoke. That it doesn’t seem to do so very much is rather peculiar, suggesting something is wrong with either the patches or the hypothesis. So what’s going on here? Is addiction to cigarettes different from addiction to opiates, explaining the disconnect between the Rat Park results and the cigarette data? That might be one possibility, though there is another: it is possible that, like quite a bit of psychology research, the Rat Park results don’t replicate so nicely.

“Reply still hazy; try controlling for gender and look again”

Petrie (1996) reports on an attempted replication of the Rat Park style of research that did not quite pan out as one might hope. In the first experiment, two groups of 10 rats were tested. The first group was raised in isolated conditions from weaning (21 days old), in relatively small cages without much to do; the second group was raised collectively in a much larger and more comfortable enclosure. Both enclosures contained food and water dispensers, freely available at all hours. In order to measure how much water was being consumed, each rat was marked for identification, and each trip to the drinking spout triggered a recording device; the weight of the water consumed was automatically recorded after each trip to the spout as well. The testing began when the rats were 113 days old and lasted about 30 days, at which point the rats were all killed (which I assume is standard operating procedure for this kind of thing).

During that testing period, the animals had access to two kinds of water: tap water and the experimental batch. The experimental batch was initially flavored with a sweetener, while on later trials various concentrations of morphine were also added to the bottle (in decreasing amounts from 1 mg to 0.125 mg, cutting by half each time). Across every concentration of morphine, the socially-reared rats drank slightly more than their isolated counterparts: at 1 mg of morphine, the average number of grams of experimental fluid consumed daily by the social group was 3.6 to the isolated rats’ 0; at 0.5 mg of morphine, these numbers were 1.3 and 0.5, respectively; at 0.25 mg, 18.3 and 15.7; at 0.125 mg, 42.8 and 30.2. In a second follow-up study without the automated measuring tools, this pattern was reversed, with the isolated rats tending to drink slightly more of the morphine water during 3 of the 4 testing phases (those numbers, in concentrations of morphine as before, with respect to social/isolated rats, were: 4.3/0.3; 3.0/9.4; 10.9/17.4; and 33.1/44.4). So the results seem somewhat inconsistent, and the differences aren’t all that large; they did not even come close to the original reports of previous research claiming that the isolated rats drank up to 7 times as much.

To explain at least some of this difference in results, Petrie (1996) notes that some genetic differences might exist between the rat strains used in the two sets of studies. If that is the case, then the implication – as always – is that the story is not nearly as simple as “bad environments cause people to use drugs”; there are other factors to think about, which I’ll get to in a moment. Suffice it to say for now that, in humans, it seems clear that recreational drug use is inherently more pleasant for certain people. Petrie (1996) also notes that the rats tended to consume the same absolute amount of morphine during each phase, regardless of its concentration in the water. The rats seemed to prefer the sweetened water to the tap water by a huge margin when it contained only sucrose, but drank less of it when the morphine (or another bitter additive) was added, so it’s likely that the rats did not much enjoy the taste of the morphine. The author concludes that the rats probably enjoyed the taste of the sugar more than they enjoyed the morphine’s effects.

An affinity many humans seem to share as well

The Petrie (1996) paper and the cigarette data, then, ought to cause us some degree of pause when assessing the truth value of Johann’s claims concerning the roots of addiction. Also worrying is the moralization that Johann engages in when he writes the following:

“The rise of addiction is a symptom of a deeper sickness in the way we live — constantly directing our gaze towards the next shiny object we should buy, rather than the human beings all around us.”

This hypothesis strikes me as the strangest of all. He is suggesting, if I am understanding him correctly, that people, like the rats, (a) find human connections more pleasurable than material items or drugs, but (b) voluntarily forgo human connections in the pursuit of things that bring us less pleasure. That is akin to finding, in terms of our rats, that the rats enjoy the taste of the sweetened water more than the bitter water, but choose to regularly drink out of the bitter bottle instead, despite both options being available. It would be a very odd psychology that generates that kind of behavior – the same kind of psychology that would drive rats in the enriched cages to leave them for the isolated morphine cages if given the choice; the very thing Johann is claiming would not, and does not, happen. It would require that some other force – presumably some vague and unverifiable entity, like “society” – is pushing people to make a choice they otherwise would not (and which, presumably, we must change to be better off).

This moralization is worrying because it sheds some light on the motivation of the author: it is possible that evidence is being selectively interpreted so as to fit a particular worldview that has social implications for others. For instance, the failure to replicate I discussed is not new; it was published in 1996. Did Johann not have access to this data? Did he not know about it? Was it just ignored? I can’t say, but none of those explanations paints a flattering picture of someone who claims expertise in the area. When the reputations of others are on the line, truth can often be compromised in the service of furthering a social agenda; this could include people stopping the search for contrary evidence, ignoring it, or downplaying its importance.

A more profitable route research might take would be to begin by considering what adaptive function the cognitive systems underlying drug use might serve. By understanding that function, we can make some insightful predictions. To do so, let’s start by asking the question, “why don’t people use drugs more regularly?” Why do so many smokers wish to stop smoking? Why do many people tend to restrict most of their drinking to the weekends? The most obvious answer to these questions is that drinking and smoking entail some costs to be suffered at a later date, whether those costs come tomorrow (in the form of a hangover) or years from now (in the form of lung cancer and liver damage). Most of the people who wanted to quit smoking, for instance, cited health concerns as their reason. In other words, people don’t engage in these behaviors more often because there are trade-offs to be made between the present and future: the short-term benefits of smoking need to be measured against their long-term costs.

“No thanks; I need all my energy for crack”

It might follow, then, that those who value short-term rewards more heavily in general – those who do not view the future as particularly worth investing in – are more likely to use drugs; the type of people who would rather have $5 today instead of $6 tomorrow. They’d probably also be more oriented towards short-term sexual relationships, explaining the interesting connection between the two variables. It would also explain other points mentioned in the Johann piece: soldiers in Vietnam using (and then stopping) heroin, and hospital patients not suffering from addiction to their painkillers once they leave the hospital. In the former case, soldiers in wartime are in environments where their future is less than certain, to say the least. When people are actively trying to kill you, it makes less sense to put off rewards today for rewards tomorrow, since you can’t claim them if you’re dead. In the latter case, people being administered these painkillers are not necessarily short-term oriented to begin with. In both cases, the value of pursuing those drugs further once the temporary threat has been neutralized (the war ends/the treatment ends) is deemed to be relatively low, as it was before the threat appeared. People might value those drugs very much when they’re in the situation, but not once the threat subsides.
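To make the discounting intuition concrete, here is a quick back-of-the-envelope sketch (my own illustration, not anything from the research discussed, and assuming simple exponential discounting) of what preferring $5 today over $6 tomorrow implies about how steeply someone devalues the future:

```python
# Hypothetical illustration (not from any cited study): if someone prefers
# $5 now over $6 tomorrow, their per-day discount factor d must satisfy
# 5 > 6 * d, i.e. d < 5/6. Assumes simple exponential discounting,
# where a reward's present value = amount * d**days.

def implied_daily_discount(now_amount, later_amount, days_later=1):
    """Largest per-day discount factor at which taking 'now' still wins."""
    return (now_amount / later_amount) ** (1.0 / days_later)

d = implied_daily_discount(5, 6)   # anything below ~0.833 picks $5 today
month_value = 100 * d ** 30        # present value of $100 paid in 30 days

print(f"implied daily discount factor: {d:.3f}")
print(f"$100 a month from now is worth about ${month_value:.2f} today")
```

At that rate of discounting, a reward a month away is worth well under a dollar today, which is the sense in which long-term health costs can weigh almost nothing against a drug’s immediate payoff.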

It would even explain why drug addiction fell when legalization and treatment hit Portugal: throwing people in jail introduces new complications to life that reduce the value of the future (like the difficulty of getting a job with a conviction, or the threats posed by other, less-than-pleasant inmates). If instead people are given some stability and resources are channeled to them, this might increase the perceived value of investing in the future versus taking the reward today. It’s not connecting with other people per se that helps with the addiction, then, so much as how one’s situation can change their valuation of the present as compared with the future.

Such a conclusion might be resisted by some on the grounds that it implies that drug addicts, to some extent, have self-selected into that pattern of behavior – that their preferences and behaviors, in a meaningful sense, are responsible for which “cage” they ended up in, to use the metaphor. Indeed, those preferences might explain both why addicts like drugs and why some might fail to develop deep connections with others. That might not paint the most flattering picture of a group they’re trying to help. However, it would be dangerous to try and treat a very real problem of drug addiction by targeting the wrong factors, similar to how just giving people large sums of windfall money won’t necessarily help them not go broke in the long term.

References: Petrie, B. (1996). Environment is not the most important variable in determining oral morphine consumption in Wistar rats. Psychological Reports, 78, 391-400.

A Great Time For Women In STEM

In last week’s post, I reviewed some evidence that video games do not appear to be doing any damage to women, either in the form of encouraging negative sexism against them or making them internalize any of those ideas themselves. This should be considered good news for the people who have been doing a lot of hand-wringing over sexism being encouraged in recreational media, though I suspect many of them will not be overjoyed with the findings. This ironic lack of enthusiasm about such data, where it exists, could be chalked up to some people having a vested social interest – and perhaps even established careers – in the notion that women are being disadvantaged and discriminated against, as victimhood can grant one a paradoxically strong position in the right contexts. Accordingly, a lack of evidence concerning discrimination and sexism would threaten the goose that lays the golden eggs, so to speak. All the worse for the hand-wringers, some new data has just been published suggesting that women are actually finding themselves advantaged in the realm of obtaining STEM faculty careers.

“There are too many successful women; it’s tanking my dissertation”

The good news will get even worse for those who would, somewhat perversely, prefer to read about data suggesting women are being harmed, despite opposing that outcome: the data upon which those results are based are rather comprehensive. Heading off the standard claims of non-representative samples, the current paper (Williams & Ceci, 2015) presents data from approximately 900 faculty members across 50 states and 371 universities, tested across 5 different experiments and utilizing 20 different sets of materials. Williams & Ceci (2015) even provided an additional incentive ($25) to a sample of about 100 subjects to elicit a higher response rate (about 90%), so as to check that those who responded to their standard solicitation (about 35%) were unlikely to differ from those who failed to respond. As far as typical research within psychology goes, this data set represents a truly Herculean feat of collection and validation.

With that piece out of the way, let’s consider what the researchers did and what they found. In the first experiment (N = 363, equally split between men and women), faculty in biology, psychology, economics, and engineering were presented with three candidates to assess for an associate professor position at their university. While the target candidate’s credentials were held constant, their gender was varied via references to them as “he” or “she”. The target was also described on lifestyle variables related to marital and parental status. When compared against an identical candidate, women were favored 67% of the time over men, representing a 2-to-1 female advantage; a result which held pretty consistently across types of institutions, sex of participants, lifestyles of the targets, and field of study. Apples-to-apples, women seemed to be heavily favored as job candidates. The only exception to this pattern was that male economists were markedly unbiased with respect to the sex of the applicants. Good for you, male economists; you seem to be a very fair-minded bunch.

When divergent lifestyles of the targets were compared against each other in experiment 2 (N = 144), some changes in that effect were observed. When comparing a married male with a stay-at-home spouse against a divorced mother, both of whom had two young children, female participants preferred the divorced mother 71% of the time, whereas male participants preferred the father 57% of the time. While there was a sex difference in that apples-to-oranges case, when it was a single, childless woman competing against a married father, the woman won 3:1 among male participants and 4:1 among females. This is in line with some recent data concerning young, single women out-earning their male peers.

Nothing will ever beat that lack of baggage

Another sex difference popped up in experiment 3 (N = 204), where a man or woman who took a one-year paternity/maternity leave in graduate school was compared against one who did not, though both had children. Whereas the male faculty preferred the woman who took a leave over the one who did not by 2:1, they didn’t seem to care whether the man did. The female participants similarly didn’t care about male paternity leave, though they favored the woman who continued to work by about 2:1 over the woman who took the leave. Maternity leave during graduate school may or may not hurt, then, depending on the sex of the person doing the assessing: for female faculty, it’s a bad thing; for males, it’s a good one.

In experiment 4, a smaller sample was used (35 engineering professors), but the prospective candidates now came complete with a full CV, rather than just a narrative summary. The same difference in favor of women came out as in study 1 (it was actually slightly more favorable towards the women, though not substantially so), and that’s for engineering; one of the more male-dominated professions out there. Finally, experiment 5 asked participants to evaluate single candidates, rather than choose among three different ones. Now each candidate had to stand on their own merits, removing some potential for socially-desirable comparisons being made, such as a man against a woman. When rating the candidates individually on a 1-10 scale of desirability as a hire, participants (N = 127) gave the identical female a full point higher on the scale, relative to the male (8.2 and 7.1, respectively). They really seemed to like the women more.

Not to put too fine of a point on it, but if these preferences were found to run against women, rather than in favor of them, I can only imagine the hordes of people who would be tripping over each other to be the most offended and outraged by them. As it stands, the authors’ conclusions that relatively low female representation in some fields is likely a product of women applying for them in fewer numbers, rather than any bias against women in the hiring process, seems reasonable. In fact, to the extent that women are being told that these areas are biased against them (when the opposite is true), the representation gap might even be encouraged, since no one wants to apply to work in a field they think will be hostile to them. So, if you’re a woman looking to get into the STEM fields, now might be a good time to try.

As for me, it’s back to the applications. By the way, got any change?

Now, the reason this bias in favor of women exists is a matter for speculation. My immediate guess would be that faculty favor good female candidates over equally good males because they are trying to, for lack of a better word, look good. Many people seem to have truly embraced the idea of diversity (inasmuch as things like sex/race per se make people more diverse in meaningful ways) and want to come off as accepting and tolerant: their only concern is getting more diverse people to apply. This speculation would hold at least insofar as including more women is concerned; I don’t know that fields in which women dominate are actively looking to recruit more men to diversify the place up. They might be, but I don’t know of them.

Will this finding be tolerated or embraced by certain vocal subgroups of people who want to see sexism against women ended? I suspect not. Instead, I imagine this data will be treated the same way some previous data about traffic stops was: even when they find a female advantage, they will continue to dig for specific cases in which women were disadvantaged. They already have their conclusion – women are discriminated against – they just have to find the evidence. As it turns out, that last part can be tricky.

References: Williams, W. & Ceci, S. (2015). National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track. Proceedings of the National Academy of Sciences

Charitable Interpretations Were Never My Strong Suit

Never attribute to malice what is adequately explained by stupidity – Hanlon’s Razor

Disagreement and dispute are pervasive parts of human life, arising for any number of reasons. As Hanlon’s razor suggests, the charitable response to disagreement would be to just call someone stupid for disagreeing, rather than evil. Thankfully, these are not either/or types of aspersion, and we’re free to consider those who disagree with us both stupid and evil if we so desire. Being an occasional participant in discussions – both in the academic and online worlds – I’m no stranger to either of those labels. The question of the accuracy of the aspersions remains, however: calling someone ignorant or evil could serve the function of spreading accurate information; then again, it could also serve the function of persuading others not to listen to what the target has to say.

“The other side doesn’t have the best interests of the Empire in mind like I do”

When persuasion gets involved, we are entering the realm where perceptions can be inaccurate, yet still be adaptive. Usually being wrong about the world carries costs, as incorrect information yields worse decision making. Believing inaccurately that cigarettes don’t increase the probability of developing lung cancer will not alter the probability of developing a tumor after picking up a pack-a-day habit. If, however, my beliefs can cause other people to behave differently, then being wrong isn’t quite as bad, as those beliefs could still do me some good. For instance, even if my motives in a debate are purely and ruthlessly selfish, I might be able to persuade other people to support my side anyway by both (1) suggesting that my point of view is not being driven by my underlying biases – but rather by the facts of the matter and my altruistic tendencies – and (2) suggesting that my opponent’s perspective is not to be trusted (usually for the opposite set of reasons). The explanation for why people in debates frequently accuse others of not understanding their perspective, or of sporting particular sets of biases, then, might have little to do with accuracy and more to do with convincing other people not to listen; any accuracy those accusations happen to have might be more accidental than anything else.

One example I discussed last year concerned the curious case of Kim Kardashian. Kim had donated 10% of some eBay sales to disaster relief, prompting many people to deride Kim’s behavior as selfishly motivated (even evil) and, in turn, also suggest that her donation be refused by aid organizations or the people in need themselves. It seemed to me that people were more interested in condemning Kim because they had something against her in particular, rather than because any of what she did was traditionally wrong or otherwise evil. It also seemed to me that, putting it lightly, Kim’s detractors might have been exaggerating her predilection towards evil by just a little bit. Maybe they were completely accurate – it’s possible, I suppose – it just didn’t seem particularly likely, especially given that many of the people condemning her probably knew very little about Kim on a personal level. If you want to watch other people make uncharitable interpretations of other people’s motives, I would encourage you to go observe a debate between people passionately arguing over an issue you couldn’t care less about. If you do, I suspect you will be struck by a sense that both sides of the dispute are, at least occasionally, being a little less than accurate when it comes to pinning motives and views on the other.

Alternatively, you could just observe the opposite side of a dispute you actually are invested in; chances are you will see your detractors as being dishonest and malicious, at least if the results obtained by Reeder et al (2005) are generalizable. In their paper, the researchers sought to examine whether one’s own stance on an issue tended to color their perceptions of the opposition’s motives. In their first study, Reeder et al (2005) presented about 100 American undergraduates with a survey asking both about their perceptions of the US war in Iraq (concerning matters such as what motivated Bush to undertake the conflict and how likely particular motives were to be part of that reason) and about whether they supported the war personally and what their political affiliation was. How charitable were the undergraduates when it came to assessing the motives behind other people’s behavior?

“Don’t spend it all in one place”

The answer, predictably, tended to hinge on whether or not the participants favored the war themselves. In the open-ended responses, the two most common motives listed for going to war were self-defense and bringing benefits to the Iraqi people by freeing them from a dictatorship; the next two most common were proactive aggression and hidden motives (like trying to take US citizens’ minds off other issues, such as the economy). Among those who favored the war, 73% listed self-defense as a motive for the war, compared to just 39% of those who opposed it; conversely, proactive aggression was listed by 30% of those who supported the war, relative to 73% of those who opposed it. The findings were similar for ratings of self-serving motives: on a 1-7 scale (from being motivated by ethical principles to selfishness), those in favor of the war gave Bush a mean of 2.81; those opposed to the war gave him a 6.07. It’s worth noting at this point that (assuming the scale is, in fact, measuring two opposite ends of a spectrum) both groups cannot be accurate in their perceptions of Bush’s motives. Given that those neither opposed to nor supportive of the war tended to fall in between those two groups in their attributions of motives, it is also possible that both sides could well be wrong.

Interestingly – though not surprisingly – political affiliation per se did not have much predictive value for determining what people thought of Bush’s motives for the war when one’s own support for the war was entered into a regression model with it. What predicted people’s motive attributions was largely their own view of the war. In other words, Republicans who opposed the war tended to view Bush largely the same as Democrats opposed to the war, just as Democrats supportive of the war viewed Bush the same as Republicans in favor of it. Reeder et al (2005) subsequently replicated these findings in a sample of Canadian undergraduates who, at the time, were far less supportive of the war on the whole than the American sample. Additionally, this pattern of results was also replicated when asking about the motives of other people who supported/opposed the war, rather than asking about Bush specifically. Further, when issues other than the war (in this case, abortion and gay marriage) were used, the same pattern of results obtained. In general, opposing an issue made those who supported it look more self-serving and biased, and vice versa.

The last set of findings – concerning abortion and gay marriage – was particularly noteworthy because of an addition to the survey: a measure of personal involvement in the issue. Rather than just being asked whether they supported or opposed one side of the issue, participants were also asked how important the issue was to them and how likely they were to change their mind about their stance. As one might expect, the tendency to see your opposition as selfish, biased, close-minded, and ignorant was magnified by the extent to which one found the issue personally important. Though I can’t say for certain, I would venture a guess that, in general, the importance of an issue to me is fairly uncorrelated with how much other people know about it. In fact, if these judgments of other people’s motives and knowledge were driven by the facts of the matter, then the authors should not have observed this effect of issue importance. That line of reasoning, again, suggests that these perceptions are probably aimed more at persuasion than accuracy; the extent to which they’re accurate is likely beside the point.

“Damn it all; I was aiming for the man”

While I find this research interesting, I do wish that it had been grounded in the theory I initially mentioned, concerning persuasion and accuracy. Instead, Reeder et al (2005) ground their account in naive realism, the tenets of which seem to be (roughly) that (a) people believe they are objective observers and (b) that other objective observers will see the world as they do, so (c) anyone who doesn’t agree must be ignorant or biased. Naive realism looks more like a description of the results they found than an explanation for them. In the interests of completeness, the authors also ground their research in self-categorization theory, which states that people seek to differentiate their group from other groups in terms of values, with the goal of making their own group look better. Again, this sounds like a description of behavior, rather than an explanation for it. As the authors don’t seem to share my taste for theoretical explanations grounded in considerations of evolutionary function (at least in terms of what they wrote), I am forced to conclude that they’re at least ignorant, if not downright evil*

References: Reeder, G., Pryor, J., Wohl, M., & Griswell, M. (2005). On attributing negative motives to others who disagree with our opinions. Personality & Social Psychology Bulletin, 31, 1498-1510.

*Not really


Biases Of Boys Or Girls Being Coy?

When it comes to understanding a lot of behavior in sexually-reproducing species, a key variable to consider is differential reproductive potential: what the theoretical upper limit on reproduction happens to be for each sex. The typical mammalian pattern is such that males tend to have much higher potential reproductive ceilings, owing to how the process of internal fertilization works. When females become pregnant, their reproductive potential is essentially turned off until sometime after the infant is born, as pregnancy and lactation typically disable ovulation. Males, on the other hand, can reproduce about as often as they have an available female. In humans, this translates into a woman being able to have a child about every three years, whereas a man could – at least in theory – fertilize around 1,000 women in that three-year period if he managed one a day. In reality, these limits are rarely reached: the most prolific mother on record had around 70 children, as she had a knack for birthing twins and triplets; the most prolific father sired closer to 900 offspring.

“And I suppose you all want to go to college now too, huh?”

Given that males and females face different cost/benefit ratios when it comes to sex, we should expect that male and female psychology looks somewhat different as well, as each sex has had different problems to solve in that domain. One such problem is detecting sexual interest. Perceptions are not perfect, so the degree of sexual interest that another person has in us can only be estimated from behavioral and verbal cues. Accordingly, it follows that people might make mistakes in perceptions: we might see sexual interest where none exists, or fail to see sexual interest that does exist. For women, failing to perceive sexual interest when it is present would be less of a bad thing than it would be for men, as the costs of missing a sexual encounter are generally higher for males. Given the very real reproductive consequences to making these perceptual errors, we should expect some cognitive systems in place designed to manage our errors.

This brings us to the matter of how these errors might be managed. According to one popular view, men manage their errors by over-perceiving sexual interest in women. That is to say that men are likely to perceive interest to be there when, in many cases, it isn’t (e.g., “she touched my arm; she must be interested in having sex with me”). This explanation, while plausible-sounding on the face of it (as it does help minimize the chances of missing a potential encounter), suffers from a theoretical weakness: it assumes that men’s perceptions should be inaccurate. That is, it posits that men’s cognitive systems overestimate how interested women actually are. The reason this is a problem is that, all else being equal, accurate perceptions tend to lead to better outcomes than inaccurate ones. If, for example, you’re overly-optimistic about your chances of landing that career in your dream field, you might spend an inordinate amount of time pursuing that goal – which you won’t obtain – when you could instead be using that time and energy to pursue outcomes with a higher expected payoff. Put more simply, your sincerely-held belief that you are likely to win the lottery will lead you to waste more money on lottery tickets than you otherwise should. The same logic holds when it comes to perceiving sexual interest: if you see interest where it doesn’t exist, you’re likely to spend excessive amounts of time and energy pursuing dead ends. An altogether better system for men would be one that detected women’s interest as accurately as possible, but decided to pursue low-probability outcomes on some occasions anyway, owing to their high reward. This system would maximize expected rewards.

Empirically, however, men do seem to over-perceive women’s sexual interest; that’s been the conclusion from past research, anyway. Specifically, if you have a man and a woman interacting, the man will tend to perceive that the woman has more sexual interest in him than she reports having. The explanation that men are over-perceiving only works, though, if one assumes that women’s reports are entirely accurate; if women are actually under-reporting their interest – either knowingly or not – then the gap between men’s and women’s reports might be more readily explainable. The idea that women are under-reporting has some conceptual merit: it is possible that women’s reports underestimate their actual amount of interest as a form of reputation management, since there are consequences to sending signals of promiscuity.

“Figure A: just really good friends. Nothing to see here.”

To figure out whether this gap in reports is due to male over-perception, female under-reporting, or some combination of the two, Perilloux & Kurzban (2014) began by presenting a list of 15 behaviors to roughly 500 men and women. The male subjects were asked to estimate a woman’s sexual intentions given that she had engaged in the behaviors; the female subjects were asked to estimate their own sexual intentions, given that they had engaged in the behaviors. This resulted in each of the 15 behaviors receiving a mean rating from each sex that could range from -3 (extremely unlikely to indicate a desire to have sex) to 3 (extremely likely). As usual, the difference in reports emerged: men’s composite average across the 15 behaviors was 1.44, whereas women’s was 0.77. So men were perceiving more interest than women were reporting.

The authors’ second study sought to examine whether these reports were consciously being over- or under-estimated by the subjects. To find out, another 500 subjects were recruited, given the same 15-item survey, and asked to estimate how much each behavior indicated a desire for sex, given that a woman had performed it. However, half the subjects were told that, in addition to their payment for the experiment, they could earn some additional money for more accurate reporting (i.e., estimating the sexual intentions within a certain margin of error). In this second study, the gap showed up as it did before: men reported an average of 1.47, whereas women reported an average of 1.14. Compared with the first study, the men’s ratings in the second were no different, though the women’s estimates increased significantly. So, when incentivized to be accurate, men’s ratings didn’t change, though women’s did and, further, they changed in the direction of being closer to the men’s. One plausible interpretation of the data, as put forth by Perilloux & Kurzban (2014), is that women know that other women will under-report their sexual intentions, just not by how much, the result being that women’s estimates come to resemble men’s estimates but still don’t quite match up.

This brings us to the third study. Here, the authors asked another 250 men and women about the same behaviors, but in a slightly different fashion: they now asked what other women would actually intend if they engaged in the behavior, as well as what those women would say they intend. In this final condition, the sex difference vanished: when considering what other women actually intend, men’s average (M = 1.91) did not differ statistically from women’s (M = 1.84); when considering what other women would say they intend, the men’s average (M = 1.42) again didn’t differ from the women’s (M = 1.54). A reasonable conclusion from this pattern of data, then, is that both men and women believe that other women will under-report their sexual intentions and – given that men’s average perceptions remained consistent across studies while women’s continually shifted toward them – that the men’s perceptions were probably accurate in the first place.
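To make the convergence across the three studies easier to see, here is a small sketch tabulating the mean ratings quoted above (the values are copied from the text; the condition labels are my own shorthand):

```python
# Mean ratings (scale runs from -3 to 3) reported across the three studies
# above. The point of interest: men's estimates stay put while women's
# drift toward them, shrinking the gap from 0.67 to 0.07.

ratings = {
    "study 1 (own intentions)":          {"men": 1.44, "women": 0.77},
    "study 2 (accuracy incentive)":      {"men": 1.47, "women": 1.14},
    "study 3 (what other women intend)": {"men": 1.91, "women": 1.84},
}

for study, means in ratings.items():
    gap = means["men"] - means["women"]
    print(f"{study}: men {means['men']:.2f}, women {means['women']:.2f}, gap {gap:+.2f}")
```

The monotonically shrinking gap is what licenses the conclusion that women’s reports, not men’s perceptions, were doing the moving.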

“Alright; I might have underestimated my interest by a little bit…”

Now, again, this is not to say that women are lying about their interest level, as lying implies some knowledge of the truth (though people likely do lie about such things explicitly from time to time); instead, it is probably more often the case that women unconsciously under-report their interest, perhaps owing to some kind of reputation management. I suppose it’s not impossible that men have biased views of women’s sexual interest and women have similarly-incorrect views of their own, but for different reasons; that doesn’t seem particularly probable, though. It is worth noting that a similar pattern of results turned up in an informal study I covered some time ago concerning whether men and women can “just be friends”. The gist of that informal study was that men tended to agree that men almost always have an ulterior sexual motive for befriending women. Women disagreed with these assessments, stating that men and women could just be friends; they disagreed, that is, until the idea of their male partner being “just friends” with another woman was raised. In that latter instance, women tended to agree with men concerning the sexual interest in such relationships. The present studies would seem to suggest that this pattern isn’t just due to clever video editing. What they don’t show is that men are wrong in their perceptions.

References: Perilloux, C. & Kurzban, R. (2014). Do men overperceive women’s sexual interest? Psychological Science, DOI: 10.1177/0956797614555727

Are Consequences Of No Consequence? (Part 2)

In my last post, I discussed the matter of nonconsequentialism: the idea that, when determining the moral value of an action, the consequences of that action are, in some sense, beside the point; instead, some acts are just wrong regardless of their consequences. The thrust of my argument there was that those arguing that moral cognitions are nonconsequentialist in nature seem to have a rather restricted view of precisely how consequences should matter. Typically, that view consists of whether aggregate welfare was increased or decreased by the act in question. My argument was that we need to consider other factors, such as the distribution of those welfare gains and losses. Today, I want to expand on that point a bit by quickly considering three other papers examining how people respond to moral violations.

Turning the other cheek when you’re being hit helps to even out the scars

The first of these papers comes from DeScioli, Gilbert, & Kurzban (2012), and it examines people’s perceptions of victims in response to moral transgressions. Their research question concerns the temporal ordering of things: do people need to first perceive a victim in order to perceive an immoral behavior, or do people perceive an immoral behavior and then look for potential victims? If the former idea is true, then people should not rate acts without apparent victims as wrong; if the latter is true, then people might be inclined to, essentially, invent victims (i.e., mentally represent them) when none are readily available. There is, of course, another way people might see things if they were nonconsequentialists: they might perceive an act as wrong without representing a victim. After all, if negative consequences from an act aren’t necessary for perceiving something as wrong, then there would be no need to perceive a victim.

To test these competing alternatives, DeScioli, Gilbert, & Kurzban (2012) presented 65 subjects with a number of ostensibly “victimless” offenses (including things like suicide, grave desecration, prostitution, and mutually-consensual incest). The results showed that when people perceived an act as wrong, they represented a victim for that act around 90% of the time; when acts were perceived to not be wrong, victims were only represented 15% of the time. While it’s true enough that many of the victims people nominated – like “society” or “the self” – were vague or unverifiable, the fact remains that they did represent victims. From a nonconsequentialist standpoint, representing ambiguous or unverifiable victims seems like a rather peculiar thing to do; better to just call the act wrong regardless of what welfare implications it might have. The authors suggest that a possible function of such victim representation would be to recruit other people to the side of the condemners but, absent the additional argument that people respond to consequences suffered by victims (i.e., that people are consequentialists), this explanation would be incompatible with the nonconsequentialist view.

The next paper I wanted to review comes from Trafimow & Ishikawa (2012), a direct follow-up to the paper I discussed in my last post. Here, the authors examined what kind of attributions people made about others who lied: specifically, whether people who lied were judged to be honest or dishonest. Now this sounds like a fairly straightforward question: someone who lies should, by definition, be rated as dishonest, yet that’s not quite what ended up happening. In this experiment, 151 subjects were given one of four stories in which someone either did or did not lie. When the story did not provide any reason for the honesty or dishonesty, those who lied were rated as relatively dishonest, whereas those who told the truth were rated as relatively honest, as one might expect. However, there was a second condition in which a reason for the lie was provided: the person was lying to help someone else, and if the person told the truth, that someone else would suffer a cost. Here, an interesting effect emerged: in terms of their rated honesty, the liars who were helping someone else were rated as being as honest as those who told the truth and harmed someone else by doing so.

“I only lied to make my girlfriend better off…”

In the words of the authors, “participants who lie when lying helps another person are absolved, whereas truth tellers do not get credit for telling the truth when a lie would have helped another person”. Now, in the interests of beating this point to death, a nonconsequentialist moral psychology should not be expected to generate that output, as that output is contingent on consequences: honesty which harmed was rated no differently than dishonesty which helped. Nevertheless, these judgments were ostensibly about honesty – not morality – so the fact that lying and truth-telling were rated comparably does require some explanation.

While I can’t say for certain what that explanation is, my suspicion is that the mind typically represents some acts – like lying – as wrong because, historically, they tended to reliably carry costs. In this case, the cost is that behaving on the basis of incorrect information typically leads to worse fitness outcomes than behaving on the basis of accurate information; conversely, receiving new, true information can improve decision making. Because people want to condemn those who inflict costs, they typically represent lying as wrong, and those whom people wish to condemn for their lying get labeled dishonest. In other words, “dishonest” doesn’t refer to someone who fails to tell the truth so much as to someone people wish to condemn for failing to tell the truth. However, when considering a context in which lying provides benefits, people don’t wish to condemn the liars – at least not as strongly – so they don’t use the label. Similarly, they don’t want to praise truth-tellers who harm others, and so avoid the honest label as well. While necessarily speculative, my analysis is also ruthlessly consequentialist, as any strategic explanation would need to be.

The final paper I wanted to discuss can be discussed quickly. In this last paper, Reeder et al (2002) examined whether situational characteristics can make morally unacceptable acts more acceptable. These immoral acts included driving cleat spikes into a player during a sports game, administering a shock to another person, or shaking someone off a ladder. The short version of the results is that when the person being harmed had previously instigated the conflict in some way – either through insults or prior physical harm – it became more acceptable (though not necessarily very acceptable) to harm them. However, when someone harmed another person for their own financial gain, the act was rated as less acceptable regardless of the size of that gain. At the risk of not saying this enough, a nonconsequentialist moral psychology should output the decision that harming people is equally wrong regardless of what they might or might not have done to you beforehand because, well, it’s only attending to the acts in question, not their precursors or consequences.

I could have sworn I just saw it move…

Now, as I mentioned above, people will tend to represent lying as morally wrong across a wide range of scenarios because lying tends to inflict costs. The frequency with which people do that could provide the facade of moral nonconsequentialism. However, even in cases where lying benefits one person, as in Trafimow & Ishikawa (2012), it is likely harming another. To the extent that people don’t tend to advocate for harming others, they would rather that one both (a) avoid the costs inflicted by truth-telling and (b) avoid the costs inflicted by lying. This is likely why some Kantians (from what I have seen) seem to advocate simply failing to provide a response in certain moral dilemmas, rather than lying, as the morally acceptable (though not necessarily desirable) option. That said, even the Kantians seem to respond to the consequences of actions by and large; if they didn’t, they wouldn’t see any dilemma in lying about Jews in the attic to Nazis during the 1940s which, as far as I can tell, they seem to. Then again, I don’t suppose many people see lying to Nazis to save lives as much of a dilemma; I imagine that has something to do with the consequences…

References: Descioli, P., Gilbert, S., & Kurzban, R. (2012). Indelible victims and persistent punishers in moral cognition. Psychological Inquiry, 23, 143-149.

Reeder, G., Kumar, S., Hesson-McInnis, M., & Trafimow, D. (2002). Inferences about the morality of an aggressor: The role of perceived motive. Journal of Personality & Social Psychology, 83, 789-803.

Trafimow, D. & Ishikawa, Y. (2012). When violations of perfect duties do not cause strong trait attributions. The American Journal of Psychology, 125, 51-60.

Honorable Violence

I’m usually not one for TV. Though I rarely find myself watching TV shows, one of the two or three exceptions to this pattern in recent years has been Game of Thrones. It’s a show replete with instances of violence, many of which are rather interesting on top of being entertaining. There are two such instances I have in mind which struck me as particularly memorable. The first scene involves a fight between Jaime Lannister and Eddard Stark. The two fight each other while Jaime’s men stand back and observe. The fight is cut short, however, when one of Jaime’s men suddenly stabs a spear through Eddard’s leg, crippling him. Following that, Jaime approaches the ally who had disabled his opponent and knocks the man out for the assistance before riding off, leaving Eddard alive. While the scene is intuitively understandable to most viewers, if this were any other species being observed, we might find the behaviors all rather peculiar. Why did Jaime’s allies not step in to assist him sooner? Why did Jaime aggress against his own ally for helping him win a potentially-lethal fight? Why did Jaime not kill his opponent when he had the chance?

And, more importantly, why doesn’t my hair look that good?

The second scene I had in mind involves the fight between Bronn and Ser Vardis Egan. The two are engaged in a trial by combat where the outcome of the fight will, as the name suggests, determine the guilt (and life) of Jaime’s brother, Tyrion, as well as the life of one of the fighters. Bronn, in this case, happens to be fighting on behalf of Tyrion. Bronn manages to win the fight by tiring out his armor-clad opponent before crippling and killing him. Despite Vardis being surrounded by his allies, none of them join the fight to help save his life. After Bronn wins, he is rebuked for his victory by the Queen of the Vale, who suggests that Bronn “doesn’t fight with honor”. Bronn responds with one of my favorite lines from the series – “No” – which he follows up by gesturing down into the pit where his recently-defeated opponent was dropped and continuing, “He did”. Honor, it seems, can get a man killed, so why do people fight with it, or make a big deal about it?

While there are many other instances of violence in Game of Thrones we might document, I wanted to use these two as a vehicle to discuss a recent paper by Romero, Pham, & Goetz (2014), which examined people’s implicit rules of combat: specifically, the acceptability of various violent tactics. In their paper, the authors posit that a recurrent history of violence over human evolution – in conjunction with the potential costs associated with violence – likely had a hand in shaping human psychological mechanisms for learning about and responding to violence. In this case, prepared learning would be the order of the day. That is to say that violent conflict, owing to its risky nature, is often not the type of thing that lends itself neatly to repeated trial-and-error learning, so it is probable that human psychology comes equipped with cognitive mechanisms for structuring our understanding of violent conflict with very little – or, in some cases, no – explicit learning about or experience with the topic. This is similar to how monkeys can quickly learn to fear snakes but not flowers.

As people would want to avoid needlessly escalating play fights into lethal combat, or failing to respond appropriately to others with lethal intent, we should expect (a) that the mind is structured to classify violent acts into relatively distinct groups, and (b) that each group should have types of violence deemed acceptable or unacceptable within it. The regulations on what kind of violence counts as acceptable in each group should also correlate with the functions of violence in those contexts. Though the authors note that such a list is unlikely to be exhaustive, they highlight four groups into which violent conflict might be classified: (1) play fighting, the purpose of which might be to prepare one for actual fighting later, but not to cause lasting injury; (2) status contests, the purpose of which is to establish dominance, but not to kill one’s rival; (3) warfare, the purpose of which is to settle between-group rivalries, which may occasionally require lethal force; and (4) anti-exploitative violence, covering what people typically consider self-defense which, again, might occasionally require lethal force.

Not included in the list: Senseless and Leisurely violence

In the first study, 237 US students read about a scenario designed to capture each of these four contexts, and were then asked to rate the acceptability of 22 violent acts in each context. The violent acts ranged from minor (slapping and punching the body) to rather severe (eye gouging and the use of a deadly weapon), as confirmed by independent ratings. The authors predicted that more severe violent acts would increase in acceptability from play fighting, to status contests, to warfare, to anti-exploitation contexts, owing to the function of violence in each context shaping the expectations of behavior in them. This was indeed the pattern that was uncovered: as violent tactics got more severe, endorsements of them as acceptable tended to decrease, and the number of acceptable tactics was lowest for play fighting, followed by status contests, warfare, and anti-exploitation contexts. Interestingly, experience learning about and discussing combat behavior did not have a significant effect on people’s judgments of an act’s acceptability. In the non-US sample of 91 MTurkers, that last result concerning learning was significant but, controlling for the level of experience, the above pattern generally replicated (though the mean values of acceptability were slightly different, with the non-US sample rating violence as generally more acceptable).

The second study really caught my attention, however, and it gets back to our initial Game of Thrones scenes. This study examined status contests specifically. The function of these contests, as previously mentioned, is to establish dominance. They can also serve a signaling function by allowing others the chance to assess who is the better fighter within the boundaries of these implicit rules of combat. Individuals who break those implicit rules, then, end up obscuring the assessment of their relative fighting ability by granting themselves competitive advantages. To use a simple yet extreme example, if you’re attempting to assess fighting ability in a boxing match and one boxer shows up to the fight with a knife, you learn less about each person’s relative boxing ability. Other types of “cheap shots” (i.e., violent tactics that extend beyond the normally accepted range for the type of conflict) similarly prevent honest information from being gathered from the results of the contest. This could be why it seemed so comprehensible to viewers for Bronn to be accused of lacking honor (as he wore his opponent out rather than fighting him directly), or why Jaime didn’t continue to fight Eddard after the spear through the leg (as winning would no longer prove anything about the combatants’ skill).

The second study asked 234 participants to read the status contest story from the first study, but now one of the fighters employed tactics rated as overly severe for that context, including: bringing allies into combat, striking the testicles of the other fighter, or using a weapon. Participants also read about three novel contexts in which one fighter handicapped himself, either by fighting with only one hand, fighting while outnumbered, or fighting a particularly formidable opponent. When the fighter in question employed one of the more severe tactics during the fight, he was granted less respect than those handicapping themselves. On a 10-point scale, bringing allies to the fight earned a mean respect rating of 2.58, striking the testicles a 2.68, and using a weapon a 2.14. By contrast, fighting with one arm was rated a 7.23, fighting outnumbered a 6.41, and fighting a formidable opponent a 5.88. On the whole, men also tended to give somewhat more respect than women did to fighters handicapping themselves, perhaps owing to men generally being the ones doing the fighting.
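For a quick side-by-side of the two classes of fighter, the mean respect ratings quoted above can be averaged within each class (a toy summary; the individual figures are copied straight from the text):

```python
# Mean respect ratings (10-point scale) from the second study, per the text.
# Averaging within each class makes the respect gap between rule-breakers
# and self-handicappers easy to see.

severe_tactics = {
    "brought allies": 2.58,
    "struck the testicles": 2.68,
    "used a weapon": 2.14,
}
self_handicaps = {
    "fought one-armed": 7.23,
    "fought outnumbered": 6.41,
    "fought a formidable opponent": 5.88,
}


def mean(ratings):
    return sum(ratings.values()) / len(ratings)


print(f"severe tactics:    mean respect {mean(severe_tactics):.2f}")  # ~2.47
print(f"self-handicapping: mean respect {mean(self_handicaps):.2f}")  # ~6.51
```

A roughly four-point gap on a ten-point scale, all from changing how one fights rather than whether one wins.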

Seems like a fair assessment of fighting ability

Now, as the authors note, the rules governing the classification of violent acts should be expected to vary somewhat from culture to culture. For some cultures, status contests might take the form of wrestling; in others, like the Game of Thrones world, status contests might sometimes involve the use of lethal weaponry. Some blurring of these categories, then, is a likely outcome at times. When duels are held with lethal weaponry, for instance, they might serve dual functions of status contests and anti-exploitation tactics, making classifications difficult. Despite that occasional blurring, however, we should expect people to have different expectations of the acceptable rules of violence contingent on its function. When the function is only to display status, grievous bodily injury should be relatively unacceptable; when the function is play fighting, even mild injuries should be frowned upon.

To turn this finding towards some personal experience, these results shed some light on the frequent complaint of many gamers concerning “honor” in their games, whether certain tactics are “cheap”, and whether certain things in the game need to be “nerfed” (i.e., have their power reduced). These complaints likely arise from a combination of the two aforementioned issues. First, if one player is using some severely-unbalanced or powerful item, the ability of each player’s skill to determine the result of the match shrinks relative to the portion determined by the item. In the event that such tactics are too good and too easy to pull off, the ability to display skill can be removed entirely, which would make the game less fun for many. Second, people might be treating these games as if they belong to different categories: some players who play just for fun might treat them as play fights, while others treat them as status competitions; still others might even treat them as anti-exploitation contexts. This could result in the different groups having different ideals as to how others should behave in game and, without agreement on the implicit rules, resentment between players can build. A similar logic likely underlies non-violent status competitions, play, and anti-exploitation tactics as well. It would be interesting to see this line of research expanded into other areas of human competition.

References: Romero, G., Pham, M., & Goetz, A. (2014). The implicit rules of combat. Human Nature, DOI: 10.1007/s12110-014-9214-3


Misinformation About Evolution In Textbooks

Throughout my years making my way through various programs at various schools, I have received (and I say this in the humblest way possible, which appears to be not very…) a number of compliments from others regarding my scholarship. People often seem genuinely impressed that I make the effort to read all the source material I reference and talk about. Indeed, when it came to the class I taught last semester, I did not review any research in class that I had not personally read in full beforehand, frequently more than once. Now, to me, this all seems relatively mundane: I feel that academics should read all the material they’re using before they use it, and that doing so should be so commonplace that it warrants no special attention. I don’t feel teachers should be teaching others about research they, like Jon Snow, know little or nothing about. I have no data regarding how often academics or non-academics try to teach others about, or argue over, research they have little personal familiarity with but, if consulting source material were as common as I would hope, it would seem odd that I received explicit compliments about it on multiple occasions. Compliments are often reserved for special behaviors, not mundane ones.

“Thanks, Bob; it was really swell of you to not murder me”

It is for this reason that I have always been at least a little skeptical of textbooks in psychology: many of them attempt to summarize large and diverse areas of research. This poses two very real questions, to my mind: (a) have the authors of these books really read and understood all the literature they are discussing and, (b) provided they have, are they able to provide a summary of it approaching adequacy in the space provided? For instance, one of my undergraduate textbooks – Human Sexuality Today, by Bruce M. King (2005) – contains a reference section boasting about 40 pages, each of which holds approximately 60 references. Now perhaps Dr. King is intimately, or at least generally, familiar with all 2,400 references on that list and able to provide a decent summary of them in the approximately 450 pages of the book; it’s not impossible, certainly.

There are some red flags suggesting this is not the case, however. One thing I can now do, having some years of experience under my belt, is return to these books and examine the sections I am familiar with to see how well they’re covered. For instance, on page 254, King (2005) discusses theories of gender roles. In that section, he makes reference to two papers by Buss and Geary but then, rather than discuss those papers, cites a third paper, by Wood and Eagly, to summarize them. This seems a rather peculiar choice; a bit like my asking someone else where you said you wanted to go eat when I could just ask you and, in fact, have a written transcript of where you said you wanted to go eat. On page 436, when discussing evolutionary theories of rape, King writes that Thornhill and Palmer’s book suggested that “women can provoke rape” (which the book does not) and that the evolutionary theory “does not explain why men rape children, older women, and other men” (demonstrating a lack of understanding of the proximate/ultimate distinction). In fact, King goes on to mention a “thoughtful review” of Thornhill and Palmer’s book suggesting that rape might be a byproduct and that “we must not confuse causation with motivation”. Thoughtful indeed. So thoughtful, in fact, that the authors of the book in question not only themselves suggested that rape might be a byproduct, but also took great pains to outline the distinction between proximate and ultimate causes. Issues like these do not appear to be the hallmark of a writer familiar with the topic he is writing about. (I will also note that, during the discussion of the function of masturbation, King writes, “Why do people masturbate? Quite simply, because it feels good” (p. 336). I will leave it up to you to decide whether that explanation is particularly satisfying on a functional level.)

Now these are only two errors, and I have neither the time nor the patience to sift through the full textbook to look for others, but there’s reason to think that this is by no means an isolated incident. I wrote previously about how evolutionary psychology tends to be misrepresented in introductory psychology textbooks and, when it is mentioned, is often confined to only a select topic or two. These frequent errors are, again, not the hallmarks of people who are terribly familiar with the subjects they are supposed to be educating others about. To the extent that people are being educated by books like these, or using them to educate others, this poses a number of obvious problems concerning the quality of that education, along with a number of questions along the lines of, “why am I trusting you to educate me?” To drive that point home a bit further, today we have another recent paper for consideration by Winegard et al (2014), who examined the representation of evolutionary psychology within 15 popular sex and gender textbooks in psychology and sociology. Since the most common information people seem to hear about evolutionary psychology does concern sex and gender, they might represent a particularly valuable target to examine.

“Don’t worry; I’m sure they’ll nail it”

The authors begin by noting that previous analyses of evolutionary psychology’s representation in undergraduate textbooks have found it to be less than stellar, with somewhere between “a lot” and “all” of the textbooks that have been examined showing evidence of errors, a minority showing outright hostility, and all of that provided the subject was even mentioned in the first place; not a good start. Nevertheless, the authors collected a sample of 15 academic textbooks from 2005 or later – six in sociology and nine in psychology – that saw some fairly regular use: out of a sample of around 1,500 sociology courses, one of those six books was used in about half of them, and a similar percentage of the 1,200 psychology courses sampled used one of the nine psychology texts. The most widely-used of these texts appeared in around 20% and 10% of courses, respectively, so these books enjoyed fairly broad popularity.

Of these 15 books, 3 did not discuss the theoretical foundations of evolutionary psychology and were discarded from the analysis; the remaining 12 books were examined for the areas in which evolutionary psychology was discussed, and any errors they made were cataloged. Of those 12 books, all contained at least one error, with the average number of errors per book hovering around 5 (allowing for the fact that a book could make the same error more than once), and an average of 4 different categories of error per book. The most common of these errors, unsurprisingly, fell into the umbrella “strawman” category, where positions not held by evolutionary psychologists are presented as representative of their actual positions (I believe the claim that Thornhill and Palmer suggested women can “provoke rape” would fall into this category). The number of errors might not seem all that large at first glance, but once one considers that the average number of pages devoted to the topic was around 6 in the psychology textbooks and 3 in the sociology ones, that works out to around one or two errors per page.

Additionally, the errors that the authors found within these textbooks are cataloged at the end of their paper. Reading through the list should be a more than a little frustrating, if entirely familiar, experience for anyone even moderately well-versed in evolutionary psychology. In line with the masturbation example above, there’s more than one instance in that list of writers suggesting that evolutionary researchers ignore the fact that people have sex for pleasure because the field focuses only on reproduction (for another example of this error, see here). Now there’s nothing wrong with being critical of evolutionary psychology, to be clear; criticism is often the lifeblood of advancement. It is important, however, that one is at least familiar with the ideas in question before attempting to criticize them, or to educate others about them. This should sound like a basic point but, then again, reading the source material you’re discussing shouldn’t be something noteworthy that one gets complimented for.

“As long I don’t read it, I can still disagree with what I think it says…”

These are, of course, just the errors; there’s no consideration here of the degree to which topics are covered in sufficient depth. To the extent that people – teachers and undergraduates alike – are receiving an education from (or creating one based on) these textbooks, we should expect to see these errors repeated. In this case, we might actually hope that students are not reading their books since, in my mind, no education on the subject is likely better than a false sense of one. Now one might make the case that the authors of these textbooks don’t have the time to read everything they cite or to cover it in the detail required for it to be of much use, meaning that we should expect errors like these to crop up. If that’s the case, though, it’s curious why anyone would rely on these textbooks as worthwhile sources of information. To put it in metaphorical terms, when it comes to providing information about EP, these textbooks seem about as good as a tour of Paris taken by plane with a guide who has never been there himself. Not only is the experience unlikely to give you much of a sense for the city, it’s not the type of thing I would pay a lot of money for. While I certainly can’t speak to how well other topics are covered, I think there might be good reason to worry about them as well.

References: King, B. (2005). Human Sexuality Today. Pearson, NJ.

Winegard, B., Winegard, B., & Deaner, R. (2014). Misrepresentations of evolutionary psychology in sex and gender textbooks. Evolutionary Psychology, 12, 474-508.