Due to a particularly engaging high school teacher, my undergraduate minor was in economics. After taking a number of economics classes at the college level, I realized that most of the assumptions economists make about how people should be expected to behave were about as useful for understanding human behavior as most of my undergraduate psychology classes; that is to say, not very. It was through Dan Ariely’s books that I was initially exposed to behavioral economics, a field which seemed to take a stand against the nonsensical assumptions of traditional economics. Happy as I was to see that first step, my enthusiasm was dampened somewhat by the fact that behavioral economics was not evolutionary economics. Economists, behavioral or otherwise, were still dealing with the human mind, and they lacked a good theory for understanding how and why the mind works. On a related note, I just finished Dan Ariely’s latest offering, The (Honest) Truth About Dishonesty: How We Lie to Everyone – Especially Ourselves (2012).
Due to a miscommunication with Amazon, I actually ended up getting my copy of this book for free, and, beyond simply saving money, I’m quite happy I did for a simple reason: I don’t think Dan’s new book is really worth spending the money on (the book jacket suggested a retail price of $27), especially if you’ve already read his first two offerings. In the (ostensibly selfless) interest of saving others time and money, here’s the main finding of the research presented in the book: given the opportunity to cheat, most people will cheat to some (relatively small) degree, with very few going all out and cheating as much as possible. Of course, the precise degree to which people cheat is flexible, and various contexts make it more or less likely that people will cheat. This might suggest that there are certain parts of the mind monitoring various environmental cues in an attempt to determine when cheating would be profitable, and to what extent one should cheat.
The research that Dan reviews cuts against what he calls the “Simple Model of Rational Crime”, in which people consciously think through the costs and benefits when it comes to deciding whether or not to commit a crime (or act immorally, more generally). Standard economic assumptions don’t seem to pan out well, and anyone familiar with Dan’s previous work will already know that. Unfortunately, Ariely replaces that simple model with his own – arguably simpler – model that goes like this:
In a nutshell, the central thesis is that our behavior is driven by two opposing motivations. On the one hand, we want to view ourselves as honest, honorable people. We want to be able to look at ourselves in the mirror and feel good about ourselves (psychologists call this ego motivation). On the other hand, we want to benefit from cheating and get as much money as possible. (p.27)
Right from the start, Ariely’s central thesis is deeply flawed. As others have pointed out (Kurzban, 2010), “feeling good” about ourselves is not a plausible function for any part of our psychology. Evolution is (metaphorically) blind to what organisms feel; it can only see what organisms do. An organism that feels terrible but does useful things would win out, every single time, against an organism that feels great but doesn’t do useful things. A quick example should demonstrate why. Let’s say feeling good is actually important, in and of itself. There are two organisms presented with a potential benefit from cheating: the first organism cheats, but only cheats a little bit in order to maintain its positive sense that it’s an honest individual; after all, it didn’t cheat that much, and it wasn’t doing any real harm, so it’s probably still a morally upstanding creature. The second organism cheats as much as it can and feels pretty good about its cheating; it doesn’t try to feel good by justifying its behavior, it just feels good about what it does generally. The second organism reaps all the benefits of cheating without sacrificing any of the good feelings, so selection should favor it over the first every time.
That example should make the problem with Ariely’s central thesis stand out in stark relief: why should an organism care about seeing itself as a morally upstanding creature, and why should seeing itself as such hinge on its perception of its own integrity? By focusing on a conscience-centric model without making the function of such a perspective clear, Ariely misses the mark. As DeScioli & Kurzban (2009) suggest, we cannot understand the function of conscience without first examining condemnation. In a world where others judge our actions, and those judgments cause those others to behave in certain ways towards us, conscience can serve as a defense mechanism: rather than risk costly punishment and social sanction for behaving in ways others perceive as immoral, an individual can avoid potentially detrimental actions in the first place.
Now, one might counter that, in the experiments Ariely reports on, there was no risk of subjects being caught or punished, and further that the subjects knew this; since any fear of punishment should have been effectively removed, concerns for condemnation can’t explain these results. However, to do so would be to make the basic error of failing to understand the difference between adapted and adaptive. Just because someone might consciously report that they understood there was no real risk, it doesn’t mean other modules in their brain came to the same conclusion.
One final point I’d like to touch on is the chapter concerning self-control. Not to rely too heavily on Kurzban (2010) here, but self-control is not like a muscle, and thinking of it as such leads one to an incorrect model of the mind. Since an incorrect model of the mind seems to be the central thesis of the book, it’s at least consistent in that regard. There are, no doubt, some interesting things to be learned from the research in Dan’s book. However, you’ll need to figure them out, more or less, on your own.
References: Ariely, D. (2012). The (honest) truth about dishonesty: How we lie to everyone – especially ourselves. New York, NY: HarperCollins.
DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112(2), 281-299. PMID: 19505683
Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.