The Trolley Problem and Other Thought Experiments
I spend a lot of time reading and thinking about brains. This is because brains (and their ethereal, higher-level superstrates*, minds) are fascinating little buggers. New discoveries are being made every day about how one part of the brain connects to some other part or how brain damage in one spot leads to changes in a person's behavior. It's very easy to look at a diagram of a brain's labyrinthine neural wiring and presume that all the answers are before you.

* "Superstrates" is, unfortunately, not a real word, but it's the best I could come up with as an opposite of substrates.

It's so easy to get lost in one brain that I often forget that the real experience of living is not about a single brain but the interactions of many brains. Human brains are in the heads of people, and people are social animals. The functions and capabilities of brains evolved not so a brain could simply sit in the dark confines of one's skull but to aid in human interaction. Since human interaction encapsulates a host of topics - morality, politics, love, friendship, culture - we can presume that the study of brains and minds has something to say about all these things as well.

Studying the social brain is quite interesting but also challenging. Let's say you want to find out what parts of the brain get activated when someone struggles with jealousy. It could be as easy as finding a jealous person, sticking them in an MRI and seeing what parts of their brain are active. But there's a problem: how do you find a jealous person? A person may be jealous out there in the real world, but once you get them into the coffin-like insides of an MRI they probably have other thoughts on their mind.

Stories come into play here, particularly the kind we call thought experiments. Instead of searching about for jealous people, scientists can create written scenarios that are designed to evoke certain behaviors or feelings in subjects. Then the subjects' reactions can be studied in a variety of ways, including, but not limited to, MRIs.

I've been reading the book "Moral Tribes" by neuroscientist/philosopher Joshua Greene and he describes many of these thought experiments. They're quite fascinating and they do seem to provide insight into the machinery of our social behavior.
The Prisoner's Dilemma
If you've ever watched Law & Order (or any similar crime drama), you've seen this. Basically, two criminals have to figure out whether their partner is going to screw them over. So what's the best thing to do? As the wiki notes...

Because betraying a partner offers a greater reward than cooperating with him, all purely rational self-interested prisoners would betray the other, thus the only possible outcome for two purely rational prisoners is double betrayal.

They will both serve two years in prison as opposed to a year each had they kept their mouths shut. The self-interested behavior does not lead to the best outcome, in the rules of this thought experiment. Of course, people don't always screw each other over in such situations. Which makes a good case that humans are not purely rational actors, despite the desires of certain economic models that they be so.

But it gets more interesting when a variation of the game, called the Iterated Prisoner's Dilemma, is played. This game is like the original but players play the game over and over again. So, if you got screwed last time, you now have a chance to screw over the guy who screwed you. This scenario is a bit more like real life, where we remember what people have done to us in the past and we can seek revenge. In this game the best strategy is definitely not to look out only for yourself (because your opponent will remember you screwed them and get you back in the next game.) In a tournament of the Iterated Prisoner's Dilemma it was shown that the most successful strategy was a simple one called "tit for tat": cooperate on the first move, and after that do whatever your opponent did on the previous move.
Basically, the advice is that you should presume your partner isn't going to screw you over, but if he or she does, then you don't trust them. Of course, you may ask, "how does this relate to someone like myself who is not in a criminal gang?" But we see situations similar to the Prisoner's Dilemma all over the place (especially the iterated version.) It's alive in romance, business, sports and friendship... any situation where we are unsure of a fellow's motives. And we often see people follow the "tit for tat" logic recommended above. For instance, you lend your hammer to your neighbor if he asks to borrow it, but if he doesn't return it, you don't lend him a screwdriver.
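To see the logic in action, here's a minimal sketch of the iterated game (my own illustration, not anything from "Moral Tribes"). The one-year and two-year sentences come from the scenario above; the other payoff values, the strategy names and the round count are just assumptions for the example.

```python
# A minimal sketch of the Iterated Prisoner's Dilemma. Payoffs are measured
# in years of prison (lower is better); only the 1- and 2-year values come
# from the scenario above, the rest are illustrative.

PAYOFFS = {  # (my move, their move) -> my years in prison
    ("stay_quiet", "stay_quiet"): 1,   # both cooperate: one year each
    ("stay_quiet", "betray"):     3,   # I get screwed over (assumed value)
    ("betray",     "stay_quiet"): 0,   # I walk free (assumed value)
    ("betray",     "betray"):     2,   # double betrayal: two years each
}

def tit_for_tat(my_history, their_history):
    """Cooperate first; afterwards copy whatever the opponent did last round."""
    return "stay_quiet" if not their_history else their_history[-1]

def always_betray(my_history, their_history):
    """The purely self-interested strategy."""
    return "betray"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    years_a = years_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        years_a += PAYOFFS[(move_a, move_b)]
        years_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))    # (10, 10): mutual trust, least time served
print(play(always_betray, tit_for_tat))  # (18, 21): betrayer wins round one, then pays for it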
The Public Goods Game

In the basic game, subjects secretly choose how many of their private tokens to put into a public pot. The tokens in this pot are multiplied by a factor (greater than one and less than the number of players, N) and this "public good" payoff is evenly divided among players. Each subject also keeps the tokens they do not contribute.

In the game, the purely self-interested tactic is to let everyone else put money in while you keep all of your money in your pocket. You get a nice payoff while investing nothing. This sort of person is often thought of as a "free rider." But from the point of view of the group as a whole, it's best if everyone invests all their tokens, thereby maximizing the return. An additional component can be added to the game so that players can punish free riders. A player can "pay" one token to ensure that a low-contributing player loses three tokens. Suddenly being purely self-interested isn't the best tactic. (A sketch of the payoff arithmetic follows below.)

You can easily see how this game relates to issues such as taxation and debates over paying "one's fair share." People's responses to this game illuminate their mindset on such issues. Interestingly, responses vary substantially by geography and culture. Players in Boston and Copenhagen were very contribution friendly, while players in Athens* and Riyadh were contribution averse. And, according to surveys, people in these low-contributing cities have permissive attitudes towards acts such as tax evasion and turnstile jumping (e.g. getting a free ride on the metro.) In these places, free riding has less of a social stigma attached. (To be clear, it isn't necessarily so that people in high-contributing cities are "nicer"; it could be they simply fear punishment more.)

* I once had a taxi driver in Athens rip me off so I'm especially willing to believe any aspersions cast upon the city's character.

Also worth noting: people playing the Public Goods Game are more willing to contribute if they have only a short time to think about it (10 seconds.) When we have to make a quick judgment, our intuition is to go along with the group.
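Here's the payoff arithmetic sketched out (again, my own illustration). The rules and the one-token-to-take-three punishment follow the description above; the endowment of 20 tokens, the 1.6 multiplier and the four-player setup are assumptions made up for the example.

```python
# A rough sketch of one round of the Public Goods Game, using the rules
# described above; the specific numbers (endowment, multiplier) are illustrative.

def public_goods_round(contributions, endowment=20, multiplier=1.6):
    """Each player keeps what they don't contribute, plus an equal share
    of the multiplied public pot."""
    n = len(contributions)
    pot = sum(contributions) * multiplier
    share = pot / n
    return [endowment - c + share for c in contributions]

# Four players: three contribute everything, one free-rides.
payoffs = public_goods_round([20, 20, 20, 0])
print(payoffs)  # [24.0, 24.0, 24.0, 44.0] - the free rider ends up with the most
# (If all four had contributed everything, each would have ended up with 32.)

# The optional punishment stage: pay 1 token to dock a free rider 3 tokens.
def punish(payoffs, punisher, target, cost=1, penalty=3):
    payoffs = payoffs[:]
    payoffs[punisher] -= cost
    payoffs[target] -= penalty
    return payoffs

print(punish(payoffs, punisher=0, target=3))  # [23.0, 24.0, 24.0, 41.0]
```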
The Trolley Problem

A runaway trolley is headed for five railway workmen who will be killed if it proceeds on its present course. You are standing on a footbridge spanning the tracks, in between the oncoming trolley and the five people. Next to you is a railway workman wearing a large backpack. The only way to save the five people is to push this man off the footbridge and onto the tracks below. The man will die as a result, but his body and backpack will stop the trolley from reaching the others. (You can't jump yourself because you, without a backpack, are not big enough to stop the trolley and there's no time to put one on.) Is it morally acceptable to save the five people by pushing this stranger to his death?

The Trolley Problem is the classic question: is it all right to sacrifice one person to save more than one? (We have to presume in this scenario that the people involved are morally equal. The workmen aren't a group of pedophiles.) According to Greene, people struggle to answer the question. (I've asked several people myself with similar results.) Ultimately, most people say, no, it's not okay to push the man off the bridge.

Now, there is a variation of the Trolley Problem called the Switch Dilemma. In this situation (quoting "Moral Tribes")...

... a runaway trolley is headed down the tracks towards five workmen who will be killed if nothing is done. You can save these five people by hitting a switch that will turn the trolley onto a side track. Unfortunately there is a single workman on the side track who will be killed if you hit the switch.

(Here's a graphic that shows both versions of the Trolley Problem.) People don't really struggle with this one. Most say it is acceptable to hit the switch and sacrifice the single workman. Of course, as you may have observed, these are really two versions of the same question: should we sacrifice one person to save five? Thus, shouldn't we have the same answer each time?

Interestingly, there are people who do say it's okay to sacrifice one person for many in both cases. These are people with brain damage in an area near your forehead called the ventromedial prefrontal cortex (VMPFC). Patients with brain damage in this area are considered emotionally deficient but, with emotions out of the way, they exhibit a certain Spock-like logic; they can see the "greater good." (However, as Antonio Damasio's book "Descartes' Error" makes clear, such patients suffer all sorts of troubles from their emotional flat-lining.)

What does this all mean? It seems likely that the moral intuition that prods most people to reject pushing the man off the bridge arises, somehow, out of the neural circuitry of the VMPFC. Beyond that, not much can be conclusively said, though Greene's book is filled with interesting pontifications on the matter.

As a whole, the results of these thought experiments seem to indicate that we are not purely rational, self-interested agents, but not purely altruistic persons either. We lie somewhere in the middle. And if one contemplates the "compartmentalization" that is implied by the VMPFC observations, another idea occurs. We may be made up of various brain modules that encourage competing behaviors in response to different situations. As a result, we are often conflicted over which of these "voices" we should follow. Moral conflict is almost literally a shouting match between different "voices" in our heads.