The scientific method is a useful tool for testing hypotheses and discerning truth – or coming as close to truth as one can. Like the famous Churchill quip about democracy, it is the worst system we have for doing so, except for all the others. That said, the scientists who use the method are often not doing so in the single-minded pursuit of truth. Put more precisely, testing hypotheses is generally not done for its own sake: people who test hypotheses typically have other motives as well, such as raising their status and furthering their careers in the process. So, while the scientific method could be used to test any number of hypotheses, scientists tend to use it for certain ends and to test certain types of ideas: those perceived to be interesting, novel, or useful. I imagine none of that is particularly groundbreaking to most people: science in theory is different from science in practice. A curious question, then, is this: given that we ought to expect scientists in all fields to use the method for similar reasons, why are some topics to which it is applied viewed as “soft” and others as “hard” (psychology and physics, respectively)?
One potential reason for this impression is that these non-truth-seeking (what some might consider questionable) uses of the scientific method could simply be more prevalent in some fields than in others. The further a field strays from science in theory toward science in practice, the softer it might be seen as being. If, for instance, psychology were particularly prone to biases that compromise the quality or validity of its data, relative to other fields, then people would be justified in taking a more critical stance toward its findings. One such possible bias is the tendency to report only the data consistent with one hypothesis or another. Since the scientific method requires reporting data that is both consistent and inconsistent with one’s hypothesis, if only the former is being reported, then the validity of the method is compromised and you’re no longer doing “hard” science. A 2010 paper by Fanelli provides some reason to worry on that front. Fanelli examined approximately 2,500 papers randomly drawn from various disciplines to determine the extent to which positive results (those which statistically support one or more of the hypotheses being tested) dominate the published literature. The Psychology/Psychiatry category sat at the top of the list, with 91.5% of all published papers reporting positive results.
While that number may seem high, it is important to put it into perspective: the field at the bottom of the list – the one reporting the fewest positive results overall – was the Space Sciences, with 70.2% of all sampled published work reporting positive results. Other fields fell along a relatively smooth line between those upper and lower limits, so the extent to which fields differ in the dominance of positive results is a matter of degree, not kind. Physics and Chemistry, for instance, both ran about 85% positive results, despite both being considered “harder” sciences than psychology. Now that the 91% figure might seem a little less worrying, let’s add some context to reintroduce the concern: those percentages only consider whether any positive results were reported, so papers that tested multiple hypotheses had a better chance of reporting something positive. It also happened that papers in psychology tended to test more hypotheses, on average, than papers in other fields. After correcting for that issue, positive results in psychology were approximately five times more likely than positive results in the space sciences. By comparison, positive results in physics and chemistry were only about two-and-a-half times more likely. How much cause for concern should this bring us?
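The “five times more likely” comparison is an odds ratio. As a rough sketch – using only the raw percentages quoted above, whereas Fanelli’s actual five-times figure comes from a regression that also corrects for the number of hypotheses tested per paper – the uncorrected odds ratio between psychology and the space sciences works out like this:

```python
def odds(p):
    """Convert a proportion of positive results into odds."""
    return p / (1 - p)

# Raw positive-result proportions quoted above (Fanelli, 2010)
psych = 0.915  # Psychology/Psychiatry
space = 0.702  # Space Sciences

# Uncorrected odds ratio: how much greater the odds of a
# positive result are in psychology than in the space sciences
raw_or = odds(psych) / odds(space)
print(round(raw_or, 2))  # → 4.57
```

That the corrected figure (roughly five) comes out higher than this raw ratio reflects the correction working against psychology: its papers tested more hypotheses per paper, which should have made reporting something positive easier, not harder.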
There are two questions to consider before answering that one: (1) what are the causes of these different rates of positive results, and (2) are these differences driving the perception that some sciences are “softer” than others? Taking these in order, there are still more reasons to worry about the prevalence of positive results in psychology: according to Fanelli, studies in psychology tend to have lower statistical power than studies in the physical sciences. Lower statistical power means that, all else being equal, psychological research should find a lower – not higher – percentage of positive results overall. If psychological studies tend not to be as statistically powerful, where else might the causes of the high proportion of positive results reside? One possibility is that psychologists are particularly likely to be predicting things that happen to be true. In other words, “predicting” things in psychology tends to be easy because hypotheses tend to be made only after a good deal of anecdata has been “collected” through personal experience (incidentally, personal experience is a not-uncommonly cited source of research hypotheses within psychology). Essentially, then, predictions in psychology are being made once a good deal of data is already in, at least informally, making them less predictions and more restatements of already-known facts.
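The reasoning about statistical power can be made concrete with a bit of arithmetic. Assuming, purely for illustration, that each paper tests a single hypothesis at a significance threshold of 0.05, the expected share of positive papers is a weighted average of the field’s power and its false-positive rate (all numbers below are hypothetical):

```python
def expected_positive_rate(power, true_rate, alpha=0.05):
    """P(positive result) = P(hypothesis true) * power
    + P(hypothesis false) * false-positive rate (alpha)."""
    return true_rate * power + (1 - true_rate) * alpha

# Hypothetical scenario: half of all tested hypotheses are actually true
print(round(expected_positive_rate(power=0.35, true_rate=0.5), 2))  # → 0.2
print(round(expected_positive_rate(power=0.85, true_rate=0.5), 2))  # → 0.45
```

Note that with power of 0.35, even if every tested hypothesis were true, no more than 35% of papers could report a positive result. A 91.5% positive rate in a low-powered field therefore suggests either that hypotheses are effectively known to be true before testing (pushing effective power toward one) or that something is inflating the false-positive rate – which is precisely the author’s point.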
A related possibility is that psychologists might be more likely to engage in outright dishonest tactics, such as actually collecting their data formally first (rather than just informally) and then making up “predictions” that restate that data after the fact. To the extent that publishers in different fields are more or less interested in positive results, we ought to expect researchers in those fields to attempt this kind of dishonesty on a correspondingly greater scale (it should be noted, however, that the data is still the data, regardless of whether it was predicted ahead of time, so the effect on its truth-value ought to be minimal). Though greater outright dishonesty is a possibility, it is unclear why psychology would be particularly prone to it relative to any other field, so it might not be worth worrying too much about. Another possibility is that psychologists are particularly prone to using questionable statistical practices that substantially boost their false-positive rates, an issue which I’ve discussed before.
There are two issues above all the others that stand out to me, though, and they might help to answer the second question – why psychology is viewed as “soft” and physics as “hard”. The first has to do with what Fanelli refers to as the distinction between the “core” and the “frontier” of a discipline. The core of a field of study represents the agreed-upon theories and concepts on which the field rests; the frontier, by contrast, is where most of the new research is being conducted and new concepts are being minted. Psychology, as it currently stands, is largely frontier-based. This lack of a core can be exemplified by a recent post concerning “101 great insights from psychology 101“. In that list, you’ll find the word “theory” used a collective three times, and two of those mentions concern Freud. If you consider the plural – “theories” – instead, you’ll find five additional uses of the term, four of which mention no specific theory. The extent to which the remaining uses represent actual theories, as opposed to redescriptions of findings, is another matter entirely. If one is left with only a core-less frontier of research, that could well send the message that the people within the field don’t have a good handle on what it is they’re studying – thus the “soft” reputation.
The second issue involves the subject matter itself. The “soft” sciences – psychology and its relatives (like sociology and economics) – deal in human affairs. This can be troublesome for more than one reason. The first is that the humans reading about psychological research are all intuitive psychologists, so to speak: we all have an interest in understanding the psychological factors that motivate other people in order to predict what they’re going to do. This seems to give many people the impression that psychology, as a field, doesn’t have much new information to offer them. If they can already “do” psychology without explicit instruction, they might come to view the field as “soft” precisely because it’s perceived as being easy. This suggestion also ties neatly into the earlier point about psychologists tending to make predictions from personal experience and intuition: if the findings being delivered tend to leave people thinking, “Why did you need research? I could have told you that”, that ease of inference might lead people to give psychology less credit as a science.
The other standout reason psychology might strike people as “soft” is that, on top of trying to understand other people’s psychological goings-on, we also try to manipulate them. It’s not just that we want to understand why people support or oppose gay marriage, for instance; we might also want to change their points of view. Accordingly, findings from psychology tend to speak more directly to issues people care a good deal about – like sex, drugs, and moral goals; few people argue over the latest implications of chemistry research – which might make people either (a) relatively resistant to the findings or (b) relatively accepting of them, contingent more on their personal views and less on the scientific quality of the work itself. This means that, in addition to many people reacting to a good deal of psychological work with “that’s obvious”, many others react with “that’s obviously wrong”, and neither reaction makes psychology look terribly important.
It seems likely to me that many of these issues could be remedied by the addition of a core to psychology. If results needed to fit into theory, various statistical manipulations might become somewhat easier to spot. If students were learning how to think about psychology, rather than to memorize lists of findings they often feel are trivial or obviously wrong, they might come away with a better impression of the field. Now if only a core could be found…
References: Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4). PMID: 20383332