Our itinerary is: (1) skewed interpretations of other people’s actions; (2) sin; (3) building better tactics for political cooperation. If you understand fundamental attribution error already and know game theory backwards and forwards, you might want to skip Part 1 and go directly to Part 2.
(If you want to get straight to the fun stuff about political cooperation, you can start with Part 3.)
I mentioned the fundamental attribution error in the Physical Anthropology post. Humans have a tendency to find patterns in all fields and phenomena, and in particular to make observations about an object and conclude that some of the observed traits explain other traits. When we observe one another, this tendency gets stronger still, and is especially notable in a tendency to attribute the actions of other people to inner qualities (personality, moral character, values, cognitive ability, etc.) rather than situational factors and external circumstances. For example, if you see a man wolf down his food, you’re more likely to assume that he’s a glutton or a slob than that he hasn’t eaten in two days, or has just learned his daughter is in an ambulance. If a woman cuts you off in traffic, you’re more likely to assume she’s a bitch or a bad driver than that she had to get out of the lane to avoid a hazard, or thought she was approaching a zipper merge. If you notice that a kid has been focused intensely on his textbook for hours at the same table in the library, you’re more likely to assume that he’s a disciplined student or a lover of learning than that he didn’t realize there was a test tomorrow and has to cram all his studying into one day, or that he made plans to go out drinking with friends and stayed in the library after they backed out.
You have surely, from time to time, gobbled food or driven aggressively yourself; and you know that in those cases there were special circumstances, that you are not a glutton or a bad driver in general; but when you see other people acting in the same way, you immediately attribute their (mis)behavior to who they are, which you know is rarely how you explain your own behavior. That is the fundamental attribution error.
This error is connected to other important defects in reasoning, notably base rate neglect and the conjunction fallacy. Base rate neglect is basically over-eager abductive reasoning: focusing on the events most likely to produce a piece of evidence without considering how likely such an event would be in the absence of any evidence. Nearly all obvious attribution errors involve neglect of base rates. Let’s concede bitches are likelier than nice people to cut someone off in traffic; but if bitches are rare, most drivers who cut you off will be nice people. For the sake of argument, say that a nice person will cut someone off in traffic on only 1 trip out of 20. In a city with 1M commuters on the road during rush hour, 50,000 drivers will cut someone else off during each day’s commute, even if all 1M of the drivers are nice. For the attribution “bitch” even to be probable, the rate at which bitches cut people off in traffic needs to exceed the “nice” rate by a factor greater than the ratio of nice drivers to bitches.
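The base-rate point can be made concrete with Bayes’ rule. All the specific numbers below are my own illustrative assumptions (a 1% rate of “bitches” who cut people off on half their trips, versus nice drivers at the post’s 1-in-20 rate); the point is only the shape of the calculation.

```python
# Base-rate check for the "she's a bitch" attribution, with made-up numbers.
# Assumed: 1% of drivers are "bitches" who cut someone off on 50% of trips;
# the other 99% are nice and do so on 5% of trips (1 in 20).

p_bitch = 0.01            # prior: fraction of drivers who are bitches
p_cut_given_bitch = 0.50  # how often a bitch cuts someone off
p_cut_given_nice = 0.05   # how often a nice driver does

# Total probability that a random driver cuts you off on a given trip
p_cut = p_bitch * p_cut_given_bitch + (1 - p_bitch) * p_cut_given_nice

# Bayes' rule: probability that the driver who cut you off is actually a bitch
p_bitch_given_cut = p_bitch * p_cut_given_bitch / p_cut

print(f"P(bitch | cut off) = {p_bitch_given_cut:.2f}")  # ~0.09
```

Even with bitches cutting people off ten times as often as nice drivers, their rarity means the driver who just cut you off is probably nice: the posterior here is only about 9%.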
The classic conjunction fallacy experiment was done by Tversky and Kahneman (1983). The experimental subjects read a story about a young woman who got a B.A. in philosophy and was deeply concerned with social justice issues; they were then asked whether ten years later this woman was more likely to be (a) a bank teller, or (b) a bank teller who is active in the feminist movement. Pause a moment to reflect. For all X, “bank tellers who are X” are a subset of “bank tellers”, and a fortiori it is always more likely that someone will be a bank teller than some special kind of bank teller. Nonetheless, Tversky and Kahneman’s subjects thought (b) was more likely: presumably this has something to do with a desire to attribute someone’s life decisions to known inner qualities (her obsession with social justice).
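The subset argument can be spelled out with a toy simulation; the population size and the rates of tellerhood and feminism are invented for illustration, and the conclusion doesn’t depend on them.

```python
# The conjunction rule: P(A and B) can never exceed P(A), because the
# people satisfying both conditions are a subset of those satisfying one.
import random

random.seed(0)
# Each person: (is_bank_teller, is_feminist), with made-up base rates
population = [(random.random() < 0.02, random.random() < 0.30)
              for _ in range(100_000)]

tellers = sum(1 for t, f in population if t)
feminist_tellers = sum(1 for t, f in population if t and f)

# The conjunction is a subset, so its count (and probability) is never larger
assert feminist_tellers <= tellers
print(tellers, feminist_tellers)
```

No matter how the rates are chosen, the second count can never exceed the first; the experimental subjects’ ranking is impossible as a matter of probability.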
Fundamental attribution error, base rate neglect, and conjunction fallacy: I didn’t choose these labels, which imply that the ways of thinking in question are erroneous, neglectful or fallacious. Ways of thinking are tools, not theorems, and the most common ways of thinking are likely to be very good tools. (They get to be common because they solve problems efficiently.) So I’m not asking you to purge yourself of these mental habits! Let’s just say that FAE causes errors that range from glaring (sometimes) to debatable (most of the time). Carefully constructed experiments create artificial situations where our reasoning very clearly goes astray; the clarity of experimental models allows us to visualize a certain type of error, and thus to see how a common way of thinking might not be as simple as it appears.
An obvious corollary of FAE in general (the tendency to attribute actions to inner qualities) is a tendency to attribute guilty actions to guilty motives. If a man is brawling: he’s a violent man, he’s aggressive, impulsive, he’s a bully. If you catch a man in a lie: he’s a liar, dishonest, dishonorable, unscrupulous, sociopathic.
There are two initial points we should make about these attributions. The first (true of all FAE) is that the attributed traits are always ones that would lead us to predict that someone who has them will do the same thing again; so if the attributions are inaccurate, we end up with wildly inaccurate ideas about the offender’s risk of re-offending. The second is that they prejudge the moral culpability of the act. Consider: you may over-generalize and label someone a “liar” or a “thief” too quickly, but you can also use the label well, and there are certainly relationships of the sort lie : liar :: theft : thief :: murder : murderer. But what about … :: broken window : vandal ? … :: violent sex : rapist ?
The problem here is that not all broken windows are acts of vandalism. Sometimes boys are playing baseball; sometimes you lock yourself out, and…! Vandalism is a very salient explanation for a broken window, and sometimes it is likelier than any other (or simply likely, i.e. likelier than all other alternatives combined). A propensity for vandalism is the sort of thing that leads people to break windows. When you assume someone who broke a window is a vandal you appear to be drawing an inference from his action (which you’ve observed) to his character (which you haven’t), but in the process you are also tacitly upgrading the original act from “breaking a window” to “vandalism”. This tacit move begs the question, since the difference between a mere broken window and a vandalized window can’t just be the fact that the window is broken!
We care about why something happened the first time largely because we want to understand whether it will happen again. Knowing whether an event might recur helps us know whether to prepare for it, or try to prevent it. But we can’t ignore an event’s moral qualities, because these affect how we react, too. One driver’s abrupt swerve into the next lane could be described as a zipper merge (mandatory!) or as cutting someone off (dangerous, rude!) with equal accuracy. A ball hitting a glass pane with a certain momentum is an equally accurate physical description of a baseball mishap (unlucky!) and of vandalism (criminal!).
The more guilt we attribute to an action, the more we worry about the guilty party’s next crime, the more negative emotions we feel towards him, and the more we want to punish him. We don’t want kids to stop playing baseball entirely because they’re neurotic about breaking windows; we don’t want good people who are pillars of their community to have their lives ruined over a mistake or a one-time lapse. But actions have consequences, and if anyone is going to face consequences, the consensus is it should be the guilty: the vandals, rapists, murderers and liars of the world.
The ultimate result is that one feels very differently about questionable conduct when it is one’s own conduct (or the conduct of a close friend). You are intimately acquainted with your own exemplary personal qualities, qualities that normally steer you away from misdeeds, as well as the key extenuating circumstances that forced your hand in any particular situation. For strangers, you immediately jump to malice: FAE. This extends to whether one thinks of the “questionable conduct” as a crime. For oneself, extenuations are easy to find and the most innocent possible description of whatever happened is best. For anyone else, their guilt is enough to prove a crime occurred.
Of course, if we don’t want to punish people who aren’t really bad people, and when we are in possession of all the facts (like when we did the deed ourselves) we nearly always see that people aren’t nearly as bad as we had thought, then the implication is that most people’s (and perhaps our entire society’s?) attitudes towards criminals are excessively cruel. If we understood the criminal’s side of the story, if we tried to grasp all the extenuating circumstances, if we saw that anyone would react similarly to the chain of events he faced, then chances are we wouldn’t want to punish him as severely. If you follow out this Officer Krupke analysis to its strictest logical conclusion, maybe we wouldn’t do much punishing at all.
But wait! It’s time for some…
Or maybe we have the right attitude for the wrong reason.
Crime, as a social problem, is fundamentally a question of deterrence. Deterrence is a question of the interdependent actions resulting from the choices made by many interacting agents: and this is the subject of game theory. Game theory is actually far more interesting and useful than anything I have to say about sin; if you aren’t familiar with the field, start with Schelling and Axelrod. As an introduction:
- You can model crime and punishment, or anti-social behavior and its deterrence more generally, as two agents deciding whether to help or to hurt each other (more technically, whether to cooperate or to defect) over and over again.
- Helping your partner is always better for him, and worse for you. Hurting him is always better for you, and worse for him. But switching from hurting to helping benefits your partner more than it hurts you, so both partners prefer mutual cooperation to mutual defection. (This is the formal structure of the Prisoner’s Dilemma.)
- If you only interact with another agent once, you can be nice or not, but you’ll always do better yourself if you hurt him. If you help him he may be grateful, but after he discovers you helped him your mutual interaction is over and you don’t benefit.
- However, if you’re interacting with the same partner over and over, you can influence whether he helps you in the future by what you do today. Remember, though, you interact with him in two ways: by helping or hurting.
- The only tool you have available is to try to figure out whether he wants to cooperate or not and then, if he doesn’t want to cooperate, hurt him until he realizes that you’ll only stop punishing him if he starts to cooperate.
- One small problem: hurting each other is how you try to take advantage of one another, but it’s also your only way to punish each other. So from any one case where he hurts you, you have no way to tell whether he hurt you to punish you (trying to signal to you, “Hey! We both need to cooperate!”) or to take advantage of you.
- Thus the challenge of a repeated helping/hurting interaction (a.k.a. “iterated Prisoner’s dilemma”) is to try to figure out from your partner’s past history of interactions with you what his intentions are; but remember, the rules he’s following and his underlying strategy may depend on what you’re trying to do.
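The helping/hurting structure in the bullets above can be pinned down with a payoff table. The specific numbers (5 > 3 > 1 > 0) are the textbook convention, my assumption rather than anything in this post; the two inequalities they satisfy are what make the game a Prisoner’s Dilemma.

```python
# One-round Prisoner's Dilemma payoffs for "me", using conventional
# (assumed) values: temptation 5 > reward 3 > punishment 1 > sucker 0.
# "C" = help (cooperate), "D" = hurt (defect).
PAYOFF = {
    ("C", "C"): 3,  # both help: reward
    ("C", "D"): 0,  # I help, he hurts: sucker's payoff
    ("D", "C"): 5,  # I hurt, he helps: temptation
    ("D", "D"): 1,  # both hurt: punishment
}

# Whatever the partner does, hurting pays me more in a single round...
for partner in ("C", "D"):
    assert PAYOFF[("D", partner)] > PAYOFF[("C", partner)]

# ...yet both players prefer mutual help to mutual hurt.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
print("one-shot dilemma structure verified")
```

This is why the one-off interaction always favors hurting, while repetition opens the door to something better.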
This is not just a thought experiment. Political scientists, mathematicians, and other eggheads actually design programs which pick help/hurt according to fixed rules which execute the programmer’s underlying strategy, and then pit their programs against one another. Every strategy does very well head-to-head against some strategies and relatively poorly against others; but in a tournament, the winner needs to perform well overall against a mix of possible opponents. You might want to pause for a moment to think about what kind of strategy you think would do best. The details are fascinating and you should read them. To give away the ending: the best overall strategy, the one that encourages the most cooperation and the least retaliation, is — (drumroll) — to start by helping, then to hurt a partner if he hurt you in the previous round and to help him if he helped you: tit-for-tat.
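Tit-for-tat is simple enough to sketch in a few lines. This is not a reconstruction of Axelrod’s tournaments, just a minimal pairing of tit-for-tat against two assumed opponents, with the conventional 5/3/1/0 payoffs.

```python
# A minimal iterated Prisoner's Dilemma sketch. A strategy is a
# function(my_history, partner_history) -> "C" (help) or "D" (hurt).
# Payoffs are (my points, partner's points) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(me, them):
    return "C" if not them else them[-1]  # help first, then mirror the partner

def always_defect(me, them):
    return "D"

def always_cooperate(me, them):
    return "C"

def play(s1, s2, rounds=200):
    """Run two strategies against each other; return their total scores."""
    h1, h2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# Against a pure defector, tit-for-tat loses only the opening round...
print(play(tit_for_tat, always_defect))     # (199, 204)
# ...and with a cooperator it locks in full mutual cooperation.
print(play(tit_for_tat, always_cooperate))  # (600, 600)
```

The two properties on display — it can’t be exploited for more than one round, and it never breaks a run of mutual cooperation — are the heart of why it performed so well in the tournaments.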
Or to phrase it differently: if you are trying to figure out who is guilty (who is likely to hurt someone) only because you want people hurting each other as little as possible, the best solution is to treat anyone who hurts anyone else as though they were guilty, and punish them accordingly. If you think you can come up with a more sophisticated system of punishment that will do a better job, you should try to create a strategy that does better than Tit-For-Tat at the Prisoner’s Dilemma first.
(Tip: you can’t, there’s a formal theorem and everything. Seriously, the details are cool.)
Thus it turns out that reacting to a misdeed as though the perpetrator were inherently criminal isn’t such a stupid idea after all. I’m not saying your belief that the guy is evil is accurate. It probably depends on the specifics of what you believe (if there are any specifics, which is unlikely), but if you were to free-associate in an unguarded moment and write down your vague intuitions about the perpetrator’s attributes as though they were precise factual statements, it’s pretty likely we could draw out some inconsistencies or unwarranted predictions and indict you for big-league fundamental attribution error. But if we ignore the facts (such as they are) and concentrate on what happens when you act as though those facts were true, what can we criticize? Someone defects; you learn that he has defected, and you start to adopt an attitude towards him that leads you to retaliate as though he were likely to hurt others again in the future; and the result is harmony and cooperation.
Reacting-as-though-guilty is not merely permissible but constructive, and perhaps even necessary to a healthy society. Even if guilt-attributions are generated by thinking similar to the fundamental attribution error, that doesn’t mean there is any easy way to replace the “bias” with a coldly rational resolution to retaliate against all misdeeds, tit-for-tat. Action requires emotion, and calm emotions that inhibit rash overreactions in normal circumstances aren’t adequate for a response to someone who violates those norms. (I’ve discussed emotions here.) The crime that inspires righteous anger, the attitudes towards the perpetrator, the desire to see him punished: of these three, who is to say which are the causes and which the effects, or whether they can be disentangled from one another at all? Perhaps our inconsistent and exaggerated beliefs about the guilty party are an unavoidable consequence of the strong emotions his punishment requires. These spurious “beliefs” could even be our inner monologue’s equivalent of trash-talk, a rhythmic patter of insults to keep the spirit fiery and the mind focused during a tense confrontation. Far from having second thoughts because our punitive attributions of guilt resemble fundamental attribution error, we should wonder whether FAE represents the evolutionary triumph of the tit-for-tat strategy.
And when some faction tries to deaden these retaliatory emotions with flurries of facts and hair-splitting, when these people portray perfectly natural reactions to crime as dumb and embarrassing — we should wonder about that, too.
Series: Loving the Sinner