An Implausible Function For Depression

Recently, I was involved in a discussion about experimenter-induced expectation biases in performance, also known as demand characteristics. The basic premise of the idea runs along the following lines: some subjects in your experiment are interested in pleasing the experimenter or, more generally, in trying to do “well” on the task (others might be trying to undermine your task – the “screw you” effect – but we’ll ignore them for now). Accordingly, if the researchers conducting an experiment are too explicit about the task, or drop hints as to what its purpose is or what results they are expecting, even hints that might seem subtle, they might actually create the effect they are looking for, rather than merely observe it. The interesting portion of the discussion I was having, however, was that some people seemed to think you could get something for nothing from demand characteristics. That is to say, some seemed to think that if, for instance, the experimenter expects a subject to do well on a math problem, that subject will actually get better at doing math.

Hypothesis 1: Subjects will now be significantly more bullet-proof than they previously were.

This raises the obvious question: if certain demand characteristics can influence subjects to perform better or worse at some tasks, how would such an effect be achieved? (I might add that it’s a valuable first step to ensure the effect exists in the first place, which, in the case of stereotype threat with regard to math abilities, it might well not.) It’s not as if these expectations are teaching subjects any new skills, so whatever information is being made use of (or not being made use of, in some cases) by the subject must have already been potentially accessible. No matter how much they might try, I highly doubt that researchers are able to simply expect subjects into suddenly knowing calculus or lifting twice as much weight as they normally can. The question of interest, then, would seem to become: given that subjects could perform better at some important task, why would they ever perform worse at it? Whatever specific answer one gives to that question, it will inevitably involve trade-offs, where being better at some task (say, lifting weights) carries costs in other domains (such as risks of injury or the expenditure of energy that could be used for other tasks). Subjects might perform better on math problems after exercise, for instance, not because the exercise makes them better at math, but because there are fewer cognitive systems currently competing with the math-relevant ones for resources.

This brings us to depression. In attempting to explain why so many people get depressed, plenty of people have suggested that depression serves a specific function: people who are depressed are thought to be more accurate in some of their perceptions, relative to those who are not depressed. Perhaps, as Neel Burton and, curiously, Steven Pinker suggest, depressed individuals might do better at assessing the value of social relationships with others, or at figuring out when to stop persisting at a task that’s unlikely to yield benefits. The official title for this hypothesis is depressive realism. I do appreciate such thinking inasmuch as the researchers appear to be trying to explain a psychological phenomenon functionally: depressed people are more accurate in certain judgments, being more accurate in those judgments leads to some better social outcomes, so there are adaptive benefits to being depressed. Neat. Unfortunately, such a line of thinking misses the aforementioned critical consideration of trade-offs: specifically, if depressed people are supposed to perform better at such tasks (that is, if people have the ability to better assess social relationships and their control over them), why would anyone ever be worse at those tasks?

If people hold unrealistically positive cognitive biases about their performance, and these biases cause people to, on the whole, do worse than they would without them, then the widespread existence of those positive biases needs to be explained. The biases can’t simply exist because they make us feel good. Not only would such an explanation be uninformative (in that it doesn’t explain why we’d feel bad without them), but it would also be useless, as “feeling good” doesn’t do anything evolutionarily useful on its own. Notwithstanding those issues, however, the depressive realism hypothesis doesn’t even seem able to explain the nature of depression very well; not on the face of it, anyway. Why should increasing one’s perceptual accuracy in certain domains go hand-in-hand with low energy levels or loss of appetite? Why should women be more likely to be depressed than men? Why should increases in perceptual accuracy also increase an individual’s risk of suicidal behavior? None of those symptoms seems like the hallmark of good, adaptive design when considered in the context of overcoming other, unexplained, and apparently maladaptive positive biases.

“We’ve managed to fix that noise the car made when it started by making it unable to start”

So, while the depressive realism hypothesis manages to think about functions, it would appear to fail to consider other relevant matters. As a result, it ends up positing a seemingly implausible function for depression; it tries to get something (better accuracy) for nothing, all without explaining why other people don’t get that something as well. This might mean that depressive realism identifies an outcome of being depressed rather than explaining depression, but even that much is questionable. This returns to the initial point I made: one wants to be sure that the effect in question even exists in the first place. A meta-analysis of 75 studies of depressive realism conducted by Moore & Fresco (2012) did not yield a great deal of support for the effect being all that significant or theoretically interesting. While they found evidence of some depressive realism, the effect size of that realism was typically around or less than a tenth of a standard deviation in favor of the depressed individuals; an effect size that the authors repeatedly noted was “below [the] convention for a small effect” in psychology. In many cases, the effect sizes were so close to zero that they might as well have been zero for all practical purposes; in other cases, it was the non-depressed individuals who performed better. It would seem that depressed people aren’t terribly more realistic; certainly not relative to the costs that being depressed brings. More worryingly for the depressive realism hypothesis, the effect size appeared to be substantially larger in studies using poor methods of assessing depression, relative to studies using better methods. Yikes.
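To put that figure in rough perspective, here is a minimal sketch of my own (an illustration, not anything taken from Moore & Fresco’s paper) of what an effect of about a tenth of a standard deviation amounts to, assuming a Cohen’s d of 0.1 and normally distributed, equal-variance accuracy scores in both groups:

# Back-of-the-envelope conversion, assuming d = 0.1 and normal, equal-variance
# distributions of judgment accuracy in both groups. The "probability of
# superiority", Phi(d / sqrt(2)), is the chance that a randomly chosen depressed
# participant judges more accurately than a randomly chosen non-depressed one.
# Nothing here comes from Moore & Fresco (2012) beyond the approximate d value
# cited in the paragraph above.
from scipy.stats import norm

d = 0.1  # approximate effect size discussed in the text
prob_superiority = norm.cdf(d / 2 ** 0.5)
print(f"P(depressed judge beats non-depressed judge): {prob_superiority:.3f}")
# ~0.53 -- barely better than a coin flip, which is why an effect of this size
# gets treated as practically indistinguishable from zero.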

So, just to summarize, what we’re left with is an effect that might not exist and a hypothesis purporting to explain that possible effect which makes little conceptual sense. To continue to pile on, since we’re already here, the depressive realism hypothesis seems to generate few, if any, additional testable predictions. Though there might well be plenty of novel predictions that flow from the suggestion that depressed people are more realistic than non-depressed individuals, there aren’t any that immediately come to my mind. Now I know this might all seem pretty bad, but let’s not forget that we’re still in the field of psychology, making this outcome sort of par for the course in many respects, unfortunate as that might seem.

The curious part of the depressive realism hypothesis, to me anyway, is why it appears to have generated as much interest as it did. The meta-analysis found over 120 research papers on the topic, a count which is (a) probably not exhaustive and (b) does not reflect any research on the topic that failed to get published, so there has clearly been a great deal of work done on the idea. Perhaps it has something to do with the idea that there’s a bright side to depression; some distinct benefit that ought to make people more sympathetic towards those suffering from it. I have no data that speaks to that idea one way or the other, though, so I remain confused as to why the realism hypothesis has drawn so much attention. It wouldn’t be the first piece of pop psychology to confuse me in such a manner.

And if it confuses you too, feel free to stop by this site for more updates.

As a final note, I’m sure there are some people out there thinking that, though the depressive realism idea is admittedly lacking in many regards, it’s currently the best explanation for depression on offer. While such conceptual flaws are, in my mind, reason enough to discard the idea even if there were no alternative on offer, there is, in fact, a much better alternative theory. It’s called the bargaining model of depression, and the paper is available for free here. Though I’m not an expert on depression myself, the bargaining model seems to make substantially more conceptual sense while simultaneously being able to account for the existing facts about depression. Arguably, it doesn’t paint the strategy of depression in the most flattering light, but it’s at least more realistic.

References: Moore, M., & Fresco, D. (2012). Depressive realism: A meta-analytic review. Clinical Psychology Review, 32(6), 496-509. DOI: 10.1016/j.cpr.2012.05.004

2 comments on “An Implausible Function For Depression”

  1. Rasputin said:

    My (completely lacking in scientific basis) view of depression is that it’s sort of a general energy-saving mechanism that also gets co-opted into making sure that low-status individuals don’t make any dangerous in-group moves they lack the social capital to have a decent chance of getting away with.

    The nice thing about this is that it fits pretty well with people in general, and Scandinavians in particular (?), being more depressed in the winter than the summer, and with Africans generally not being very depressed at all, despite having plenty of things to be unhappy about. In Africa there isn’t the same scarcity-abundance cycle of energy availability, and those societies are generally much more winner-takes-all than Scandinavian ones. So the Scandinavian environment selects for people who calm down when food is scarce and has decent pay-offs for males who are mediocre and willing to submit to authority, whereas Africa selects for being active and for some pretty “lively” intra-male competition.

    Conversely, I’d expect mania-esque traits to follow the reverse pattern: that is, to be more prevalent among high-status individuals and more likely to appear when there’s lots of sun.

    Anyway, I have no idea how realistic the idea is, or if there’s evidence for or against.

    The bargaining idea doesn’t seem bad, but it doesn’t really account for lonely people being depressed. If there’s no one around to see you being miserable there doesn’t seem to be much point in being miserable.

    • Jesse Marczyk said:

      You wouldn’t be the first to have suggested such an idea. It would, however, still leave issues like the suicidal thoughts or behaviors that accompany depression relatively unexplained. The same goes for the frequent loss of appetite. There’s a difference between conservation and a lack of self-care. The thinking about social capital is good, though it would require a measure of social capital which is probably very difficult to come by. Further, once you have the measure, depression should only really be likely to strike those who have little of it.