Can Rube Goldberg Help Us Understand Moral Judgments?

Though many people might be unfamiliar with Rube Goldberg himself, they are often familiar with Rube Goldberg machines: anyone who has ever seen the commercial for the game “Mouse Trap” has at least a passing familiarity with them. Admittedly, that commercial is about two decades old at this point, so maybe a more timely reference is in order: OK Go’s music video for “This Too Shall Pass” is a fine demonstration (or Mythbusters, if that’s more your cup of tea). The general principle behind a Rube Goldberg machine is that it completes an incredibly simple task in an overly complicated manner. For instance, one might design one of these machines to turn on a light switch, but that end state will only be achieved after 200 intervening steps and hours of tedious setup. While these machines provide a great deal of novelty when they work (and that is a rather large “when”, since there is the possibility of error at each step), they might also teach us a non-obvious lesson about our cognitive systems designed for moral condemnation.

Or maybe they can’t; either way, it’ll be fun to watch and should kill some time.

In the literature on morality, there is a concept known as the doctrine of double effect. The principle states that actions with harmful consequences can be morally acceptable provided a number of conditions are met: (1) the act itself is morally neutral or better, (2) the actor intends to achieve some positive end through acting, not the harmful consequence, (3) the bad effect is not a means to the good effect, and (4) the positive effects sufficiently outweigh the negative ones. While that might all seem rather abstract, two concrete and popular examples can demonstrate the principle easily: the trolley dilemma and the footbridge dilemma. Taking these in order, the trolley problem involves the following scenario:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. Unfortunately, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person.

In this dilemma, most people who have been surveyed (about 90% of them) suggest that it is morally acceptable to pull the lever, diverting the train onto the side track. It also fits the principle of double effect nicely: (1) the act (redirecting the train) is not itself immoral, (2) the actor intends the positive consequence (saving the five) and not the negative one (the one death), (3) the bad consequence (the death) is not a means of achieving the good outcome, but rather a byproduct of the action (redirecting the train), and (4) the lives saved substantially outweigh the lives lost.

The footbridge dilemma is very similar in setup, but different in a key detail: rather than redirecting the train to a side track, a person is pushed in front of it. While the person dies, his body causes the train to stop before hitting the five hikers, saving their lives. In this case, only about 10% of people say it’s morally acceptable to push the man. We can see how double effect fails here: (1) the act (pushing the man) is itself on the immoral side of things, (2) the death of the person being pushed is intended, and (3) the bad consequence (the man dying) is the means by which the good consequence is achieved; the fact that the positive consequences outweigh the negative ones in terms of lives saved is not enough. But why should this be the case? Why do consequences alone not dictate our actions, and why can factors as simple as redirecting a train versus pushing a person make such tremendous differences in our moral judgments?
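One way to make the four conditions concrete is to treat them as a simple checklist. Below is a minimal sketch of that checklist in Python; the field names and the “sufficiently outweighs” threshold are illustrative assumptions of mine, not anything drawn from the philosophical literature, but running it on the two dilemmas reproduces the asymmetry in people’s judgments.

```python
# A minimal sketch of the doctrine of double effect as a checklist.
# All names and the 2:1 "outweighs" threshold are illustrative
# assumptions, not part of the moral-philosophy literature.
from dataclasses import dataclass

@dataclass
class Action:
    act_is_neutral_or_good: bool  # (1) the act itself is morally neutral or better
    harm_is_intended: bool        # (2) does the actor intend the harm itself?
    harm_is_means_to_good: bool   # (3) is the harm the means to the good effect?
    lives_saved: int
    lives_lost: int

def permitted_by_double_effect(a: Action, ratio: float = 2.0) -> bool:
    return (a.act_is_neutral_or_good
            and not a.harm_is_intended           # harm must be a foreseen byproduct
            and not a.harm_is_means_to_good      # the good can't flow through the harm
            and a.lives_saved >= ratio * a.lives_lost)  # (4) good sufficiently outweighs bad

trolley = Action(True, False, False, lives_saved=5, lives_lost=1)
footbridge = Action(False, True, True, lives_saved=5, lives_lost=1)

print(permitted_by_double_effect(trolley))     # True: matches the ~90% approval
print(permitted_by_double_effect(footbridge))  # False: conditions (1)-(3) all fail
```

Note that the footbridge case fails three of the four conditions at once, which is why adjusting the body count alone cannot rescue it.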

As I suggested recently, the answer to both of those questions can be understood by beginning our analysis of morality with an analysis of condemnation. These questions can be rephrased in that light to the following forms: “Why might people wish to morally condemn someone for achieving an outcome that is, on the whole, good?” and, “Why might people be less inclined to condemn certain outcomes, contingent on how they’re brought about?” The answer to the first question is fairly straightforward: I might wish to morally condemn someone because their actions (or my failing to morally condemn them) might impose some direct costs on me, even if they benefit others. For instance, I might wish to condemn someone for their behavior in the trolley or footbridge problem if it’s my friend dying, rather than a stranger. That some generally positive moral outcome was achieved is irrelevant to me if it was costly from my perspective. Natural selection doesn’t design adaptations for the good of the group, so the fact that the group’s welfare increased is beside the point. Of course, a cost is a cost is a cost, so why should it matter to me at all whether my friend was killed by being pushed or by having the train sent towards him?

“DR. TEDDY! NOOOO!”

Part of that answer depends on what other people are willing to condemn. Trying to punish someone for their actions is not always cheap or easy: there’s always a chance of retaliation by the punished party or their allies. After all, a cost is a cost is a cost, to both me and them. This social variable means that attempting to punish others without additional support might at times be completely ineffective (or at least substantially less effective). Provided that other parties are less likely to punish negative byproducts than negative intended outcomes, this puts pressure on you to persuade others that the person you want to punish acted with intent, and it puts the reverse pressure on the actor: to convince others they did not intend the bad outcome. This brings us back to Rube Goldberg, the footbridge dilemma, and a slight addition to the doctrine of double effect.

There are some who argue that the doctrine of double effect isn’t quite complete. Specifically, there is an unappreciated third type of action: one in which a person acts because a negative outcome will obtain, but does not intend that outcome (what is known as “triple effect”). This distinction is a bit trickier to grasp, so another example will help. Say that we’re again talking about the footbridge dilemma: there is a man standing on the bridge over the tracks, with the oncoming train about to hit the five hikers. However, we can pull a lever which will drop the man onto the track where he will be hit, thus stopping the train and saving the five. This is basically identical to the standard footbridge problem, and most people would deem it unacceptable to pull the lever. But now let’s consider another case: again, the man is standing on the bridge, but the mechanism that will drop him off the bridge is a light sensor. If light reflects off the train onto the sensor, the bridge will drop the man, he will die, and the five will be saved. Seeing the oncoming train, someone, Rube Goldberg-style, shines a spotlight on the train; the reflected light hits the sensor, dropping the man onto the track, killing him and saving the five hikers.

There are some (Otsuka, 2008) who argue there is no meaningful difference between these two cases, but in order to make that claim, they need to infer something about the actor’s intentions in both cases, and precisely what one infers affects the subsequent shape of the analysis. Were one to infer that there is really only one problem to be solved – the train that is going to kill five people – then the intentions of the person pulling the lever to illuminate the train and of the person pulling the lever to drop the man are equivalent and equally condemnable. However, there is another inference one could make in the light case, as there are multiple facets to the problem: the train will kill five people, and the train isn’t illuminated. If one intends to solve the latter problem (so now there will be an illuminated train about to kill five people), one also, as a byproduct of solving that problem, both solves the problem of the five people getting killed and causes the death of the man who got dropped onto the track. Now one could argue, as Otsuka (2008) does, that such an example fails because people could not plausibly be motivated to solve the non-illuminated part of the problem, but that seems largely a matter of perspective. The addition of the light variable introduces, even if only to some small degree, plausible deniability capable of shifting the perception of an outcome from intended to byproduct. Someone pulling the lever could have been doing so in order to illuminate the train or in order to drop the man onto the track, and it’s not entirely clear which is the case.

“Well how was I supposed to know I was doing something dangerous?”

The light case is also a relatively simple one: there are only three steps (shine the light on the train, the reflected light trips the sensor, the sensor drops the man onto the track, stopping the train), and perfect knowledge is assumed (the person shining the light knew this would happen). Changing either of these variables would likely alter the blame assigned to the actor: if the actor didn’t know about the light sensor or the man on the footbridge, condemnation would likely decrease; if the action involved ten steps rather than three, this could introduce further plausible deniability, especially if any of those steps involved the actions of other people. It would thus be in the actor’s best interests to deny their knowledge of the outcome, or to separate the outcome from their initial action as much as possible. Conversely, someone looking to condemn the actor would need to do the reverse.

Now maybe this all sounds terribly abstract, but there are real-life cases to which similar kinds of analysis apply. Consider cases where a child is bullied at school and later commits suicide. Depending on one’s perspective in these kinds of cases, one might condemn or fail to condemn the bullies for the suicide (though one might still blame them for the bullying); one might also condemn the parents for not being there for the child as they should have been, or one might blame no one but the suicide victim themselves. As one thinks about the ways in which the suicide could have been prevented, there are countless potential Rube Goldberg-style variables in the causal chain to point to (violent media, the parents, the bullies, the friends, their diet, the suicide victim, the school, etc.), the modification of any of which might have prevented the negative outcome. This gives condemners (who may wish to condemn people for initially unrelated reasons) a wide array of plausible targets. However, each of these potential sources also gives the other sources some way of mitigating or avoiding blame. While such strategic considerations tend to make a mess of normative moral theories, they do provide us the tools required to actually begin to understand morality itself.

References: Otsuka, M. (2008). Double Effect, Triple Effect and the Trolley Problem: Squaring the Circle in Looping Cases. Utilitas, 20, 92-110. DOI: 10.1017/S0953820807002932
