“In theory, theory and practice are the same. In practice, they are not.”
There is a relatively famous quote attributed to Michelangelo, describing his process of carving a statue: “I saw the angel in the marble and carved until I set him free”. Martin Nowak, in his book SuperCooperators (2011), uses that quote to express his admiration for mathematical models as tools for studying cooperation. By stripping away the “noise” in the world, one can end up with some interesting conclusions. For instance, it was this stripping away of the noise that led to the now-famous programming competition that showed us how successful a tit-for-tat strategy can be. There’s just one hitch, and it’s expressed in another relatively famous quote attributed to Einstein: “Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed.” Imagine instead that Michelangelo had not seen an angel in the marble, but rather a snake: he would have “released” the snake from the marble instead. That Michelangelo “saw” the angel in the first place seemed to preclude his seeing the snake – or any number of other possible images – that the marble might have represented just as well. I should probably also add that neither the snake nor the angel was actually “in” the marble in the first place…
The reason I bring up Nowak’s use of the Michelangelo quote is that, both in his book and in a recent paper (Nowak, 2012), Nowak (a) stresses the importance of using mathematical models to reveal underlying truths by stripping away noise from the world, and (b) advocates for the re-addition of that noise, or at least some of it, to make the models better at predicting real-world outcomes. The necessity of this latter point is demonstrated neatly by the finding that, as the rules of the models designed to assess cooperation shifted slightly, the tit-for-tat strategy no longer emerged as victorious. When new variables – ones previously treated as noise – are introduced to these games, new strategies can best tit-for-tat handily. Sometimes the dominant strategy won’t even remain static over time, shifting between patterns of near-universal cooperation, universal defection, and almost anything in between. That new pattern of results doesn’t mean that a tit-for-tat strategy isn’t useful on some level; just that its usefulness is restricted to certain contexts, and those contexts may or may not be represented in any specific model.
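To make that point concrete, here’s a toy sketch of my own (in Python; this is not Nowak’s actual model, and the payoff values and field of strategies are my own assumptions): a round-robin iterated prisoner’s dilemma in which nothing changes but the length of each match, and the winning strategy flips.

```python
import itertools

# Conventional prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the partner's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def match(strat_a, strat_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tournament(field, rounds):
    # field: list of (label, strategy); every pair plays one match.
    scores = [0] * len(field)
    for i, j in itertools.combinations(range(len(field)), 2):
        si, sj = match(field[i][1], field[j][1], rounds)
        scores[i] += si
        scores[j] += sj
    return [(field[i][0], scores[i]) for i in range(len(field))]

field = [('TFT', tit_for_tat)] * 3 + [('ALLD', always_defect)]
long_game = max(tournament(field, rounds=10), key=lambda x: x[1])
short_game = max(tournament(field, rounds=3), key=lambda x: x[1])
# With 10-round matches a tit-for-tat player tops the field; shorten
# matches to 3 rounds and always-defect takes the lead instead.
```

The only “rule” that changed between the two tournaments was the match length, yet the victor changed with it – which is the whole worry about treating any one model’s winner as the strategy that works.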
Like Michelangelo, then, these theoretical models can “see” any number of outcomes (as determined by the initial state of the program and its governing rules); like Einstein, these models can also only “see” what they are programmed to see. Herein lies the tension: these models can be excellent for demonstrating many things (like that group selection works), but many of those things which can be demonstrated in the theoretical realm are not applicable to the reality that we happen to live in (also like group selection). The extent to which those demonstrations are applicable to the real world depends on the extent to which the modeller happened to get things right. For example, let’s say we actually had a slab of marble with something inside it, and our goal is to figure out what that something is: a metaphorical description of doing science. Did Michelangelo demonstrate that this something was the specific angel he had in mind by removing everything that wasn’t that angel from an entirely different slab of marble? Not very convincingly, no. He might have been correct, but there’s no way to tell without actually examining the slab with that something inside of it directly. Because of this, mathematical models do not serve as a replacement for experimentation or theory in any sense.
On top of that concern, a further problem is that, in the realm of the theoretical, any abstract concept (like “the group”) can be granted as much substance as any other, regardless of whether those concepts can be said to exist in reality; one has a fresh slab of marble in which one can “see” anything, constrained only by imagination and programming skill. I could, with the proper technical know-how, create a mathematical model demonstrating that people with ESP have a fitness advantage over those without the ability. By contrast, I could create a similar model demonstrating that people without ESP have a fitness advantage over those with it. Which outcome eventually obtains depends entirely on the ways in which I game my model in favor of one conclusion or the other. Placed in that light (“we defined some strategy as working and concluded that it worked”), the results of mathematical modeling seem profoundly less impressive. More to the point, however, the outcome of my model says nothing about whether or not people actually have these theoretical ESP abilities in the first place. If they don’t, all the creative math and programming in the world wouldn’t change that fact.
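To show just how easy that rigging is, here’s a deliberately loaded toy model. The selection update is a standard replicator equation, but the fitness edge of the hypothetical ESP carriers is simply a number I hand the model, so the “conclusion” is whatever I typed in.

```python
# A deliberately rigged model: whether hypothetical "ESP" carriers spread
# or vanish is decided entirely by the fitness value I choose, not by
# anything discovered about the world. All parameter values are invented.
def final_esp_share(esp_fitness, base_fitness=1.0, start=0.5, generations=100):
    p = start  # share of the population carrying the ESP trait
    for _ in range(generations):
        mean_fitness = p * esp_fitness + (1 - p) * base_fitness
        p = p * esp_fitness / mean_fitness  # standard replicator update
    return p

esp_wins = final_esp_share(esp_fitness=1.1)   # share climbs toward 1
esp_loses = final_esp_share(esp_fitness=0.9)  # share collapses toward 0
```

Same machinery, opposite “findings” – and neither run says anything about whether ESP exists.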
As you can no doubt guess by this point, I don’t hold mathematical modeling in the same high esteem that Nowak seems to. While its theoretical utility is boundless, its practical utility seems extremely limited, depending on the extent to which the assumptions of the programmer approach reality. With that in mind, I’d like to suggest a few other details that do not yet seem to have been included in these models of cooperation. That’s not to say that the inclusion of these variables would allow a model to derive some new and profound truths – as these models can only see what they are told to see and how they are told to see it – just that these variables might help, to whatever degree, the models better reflect reality.
The first of these issues is that these cooperation games seem to be played using an identical dilemma between rounds; that is to say, there’s only one game in town, and the payoff matrices for cooperation and defection remain static. This, of course, is not the way reality works: cooperation is sometimes mutually beneficial, other times mutually detrimental, and still other times beneficial for only one of the parties involved, and all of that changes the game substantially. Yes, this means we aren’t strictly dealing with cooperative dilemmas anymore, but reality is not made up of strictly cooperative dilemmas, and that matters if we’re trying to draw conclusions about reality. Adding this consideration into the models would mean that behavioral strategies are unlikely to ever cycle between “always cooperate” and “always defect”, as Nowak (2012) found that they did in his models. Such strategies are too simple-minded and underspecified to be practically useful.
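A minimal sketch of that point, with both payoff matrices invented for illustration: in the first game cooperation is mutually beneficial, in the second it is mutually detrimental, so no fixed action – cooperate-always or defect-always – is best in every round once the game itself can change.

```python
# Two invented games. In GAME_A (a standard dilemma), mutual cooperation
# out-earns mutual defection; in GAME_B, cooperating hurts both parties.
GAME_A = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
GAME_B = {('C', 'C'): (-2, -2), ('C', 'D'): (-5, 1),
          ('D', 'C'): (1, -5), ('D', 'D'): (0, 0)}

def mutual_payoff(game, move):
    # Payoff to each player when both players make the same move.
    return game[(move, move)][0]

better_in_a = mutual_payoff(GAME_A, 'C') > mutual_payoff(GAME_A, 'D')
better_in_b = mutual_payoff(GAME_B, 'D') > mutual_payoff(GAME_B, 'C')
# Both comparisons hold: the "right" move depends on which game the
# world happens to be serving up this round.
```

A model that only ever deals GAME_A can’t tell us how a strategy fares in a world that mixes in games like GAME_B.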
A second issue involves the relative costs and benefits of cooperation and defection even within the same game. Sometimes defecting may lead to great benefits for the defector; at other times, defecting may lead to only small ones. A similar situation holds for how much of a benefit cooperation will bring to one’s partner. A tit-for-tat strategy could be fooled, so to speak, by this change of rules (i.e. I could defect on you when the benefits to me are great and reestablish cooperation only when the costs of cooperating are low). Just as cooperation will not yield identical payoffs over time, it will also not yield identical payoffs across specific individuals. This would make some people more valuable to have as cooperative partners than others and, given that cooperation takes some amount of limited time and energy, that means competition for those valuable partners. Similarly, this competition can mean that cooperating with one person entails simultaneously defecting against another (cooperation here is zero-sum; there’s only so much to go around). Competition for these more valuable individuals can lead to all sorts of interesting outcomes: people being willing to suffer defection for the privilege of certain other associations; people actively defecting on or punishing others to prevent those others from gaining said associations; people avoiding even trying to compete for these high-value players, as their odds of achieving such associations are vanishingly low. Basically, all sorts of politically-wise behaviors we see from the characters in Game of Thrones that don’t yet find themselves represented in these mathematical models.
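The “defect when the stakes are high, rebuild trust when they’re low” gambit can be sketched directly. In this toy version (multipliers and payoffs are my own invention), each round’s payoffs are scaled by a stakes multiplier, and a hypothetical “opportunist” defects only on the high-stakes rounds, leaving tit-for-tat to retaliate one cheap round too late.

```python
# Base prisoner's dilemma payoffs, scaled each round by a stakes multiplier.
BASE = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
MULTIPLIERS = [1, 1, 5] * 4  # 12 rounds; every third round is high-stakes

def play_stakes(multipliers):
    tft_score = opp_score = 0
    opp_last = 'C'  # tit-for-tat copies this; it cooperates on round one
    for m in multipliers:
        tft_move = opp_last                 # tit-for-tat: mirror last move
        opp_move = 'D' if m >= 5 else 'C'   # opportunist: defect only when stakes are high
        t, o = BASE[(tft_move, opp_move)]
        tft_score += t * m
        opp_score += o * m
        opp_last = opp_move
    return tft_score, opp_score

tft_total, opp_total = play_stakes(MULTIPLIERS)
# The opportunist banks the big rounds and repairs the relationship in
# the cheap ones, leaving tit-for-tat's retaliation nearly worthless.
```

Tit-for-tat’s rule is perfectly followed here; it just answers five-point defections with one-point revenge, which is exactly the kind of exploit a static-payoff model can’t surface.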
A final issue concerns the information that individuals in these games are exposed to: it’s all true information. In the non-theoretical realm, it’s not always clear whether someone you’ve been interacting with cooperated or defected, or how much effort they put into the venture even if they were on the cooperating side of the equation. If individuals in these games could reap the benefits of defecting while simultaneously convincing others that they had cooperated, that’s another game-changer. Modeling all of this is, no doubt, a lot of work, but potentially doable. It would lead to all sorts of new findings about which strategies worked and which didn’t, and how, and when, and why. The larger point, however, is that the results of these mathematical models aren’t exactly findings; they’re restatements of our initial intuitions in mathematical form. Whether those intuitions are poorly developed and vastly simplified or thoroughly developed and conceptually rich is an entirely separate matter, as they’re all precisely as “real” in the theoretical domain.
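One hedged way to sketch that deception (everything here is invented for illustration): split each move into an “actual” component that determines payoffs and a “reported” component that is all the partner ever sees, then let tit-for-tat condition on the report rather than the deed.

```python
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def versus_tit_for_tat(partner_actual, partner_reported, rounds=10):
    # The partner always plays `partner_actual` (which sets the payoffs)
    # but tit-for-tat only ever observes `partner_reported`.
    tft_score = partner_score = 0
    last_report = 'C'  # tit-for-tat cooperates on the first round
    for _ in range(rounds):
        tft_move = last_report
        t, p = PAYOFF[(tft_move, partner_actual)]
        tft_score += t
        partner_score += p
        last_report = partner_reported
    return tft_score, partner_score

honest = versus_tit_for_tat(partner_actual='D', partner_reported='D')
liar = versus_tit_for_tat(partner_actual='D', partner_reported='C')
# The honest defector gets punished from round two onward; the liar keeps
# harvesting the sucker's payoff because the defection is never "seen".
```

Against the honest defector, tit-for-tat limits the damage; against the liar, it never stops cooperating, which is exactly why all-true-information models flatter the strategy.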
References: Nowak, M. (2011). SuperCooperators: Altruism, evolution, and why we need each other to succeed. New York: Free Press.
Nowak, M. (2012). Evolving cooperation. Journal of Theoretical Biology, 299, 1-8. DOI: 10.1016/j.jtbi.2012.01.014