The beginning of Kristor’s recent post on categorical imperatives is excellent. However, as he begins to drift into the language of game theory to explain the importance of honoring God and kin, he makes a few points I would like to take issue with.
(If you did not follow the link: note that Kristor prefers “profect” to “cooperate”.)
I. This seems inaccurate:
> Defection is all that the defective mind can see. So the Marxian mind sees all profective moves as veiled defections. Only defective minds – minds themselves immured in the strategy of defection – could find this account of human life convincing.
Highly cooperative environments have second-order dynamics that channel possible defections or partial defections into hypocrisy, pharisaism, and so on. In a degenerating society, cooperators will tend to assume that anyone who wants to defect has already defected, because the costs of defection (and in particular the costs of open defection) are low anyway. But in a highly cooperative society which has a known level of (unidentified) defections, cooperative people will start to become aware that some of these defections must be the work of people who appear to be cooperative; i.e., they are following the strategy of defecting wherever it is safe to do so. In fact, some of these veiled defectors may be cooperating specifically in order to lull their targets into a false sense of security!
This relationship between collective cooperation and hypocrisy appears at the social level, but it is even more striking when looking at institutions within society. Obscure cults or associations can rely exclusively on the tiny minority of the population who share an obsessive dedication to the group’s ideals. So long as it still has a reservoir of outliers to draw on and its own irrelevance shields it from the notice of amoral sociopaths, a growing institution can count on high levels of internal cooperation.
But as an institution grows, it attracts more attention and eventually outstrips the supply of idealists. It will soon have ordinary members who are “in it for the paycheck” but try to blend in with the institution’s culture and its pieties. Eventually, it will attract sociopaths who blend in with its culture precisely to deceive the remaining idealists (and the clock-punchers who look to idealists for guidance). If a sociopath succeeds he can maximize his share of the power and influence, since the institution will try to entrust the power it is amassing to those most likely to cooperate.
So even if we refrain from making any particular assumptions about the psychology and morality of Kristor’s cooperators, we can see how they could fall into skepticism and cynicism about the actual motives of those who surround them. They don’t know who the hidden defector is — it could be anyone! From there it is only a short step to “it could be everyone,” if one remembers that many people following the “defect wherever it is safe to do so” strategy may never find a chance to defect safely and secretly.
II. But we can go further. Note that Kristor describes responding to a defective environment with cooperation as “simply stupid”.
> In defection, it is simply stupid to proceed other than by defecting at every turn.
This is exactly the rational egoism the Bolsheviks attribute to the law-abiding bourgeois! “I’ll cooperate with you so long as cooperation helps me, and I’ll slaughter you when it starts to look like things are going downhill.”
I had a debate with Jim a few weeks ago on a similar topic. Jim can’t quite accept that ordinary, reasonable people want to see signs that you see them as “on your team”, as an amicus rather than a hostis. Game theory is hard even in a lecture hall. It’s much harder when you extend the models to include the sorts of wrinkles which affect real-life situations; and hardest of all is figuring out what kind of situation you’re in, from a game-theoretic perspective, as it is happening to you. (And remember: in most games it matters whether you expect your opponent to play intelligently or not!)
If game theory is not always easy, and if for people of average intelligence it’s never easy, then the strategic arguments in favor of the (cooperate, cooperate) equilibrium become massively less convincing when the equilibrium you are discussing is very fragile.
Infinitely iterated prisoner’s dilemmas do have a (cooperate, cooperate) equilibrium, under certain reasonable assumptions, but how many infinitely-iterated social interactions have you been in? Even life-long interactions are slender threads in the overall fabric of social life. You can finesse this shortage of infinitely-iterated interactions by saying instead that the players don’t know when the interaction will end… but realistically, how often are you in interactions where you really have no idea whether or not this will be the last interaction? You might reply that while certain interactions are predictably short-lived, the reputation you gain in them affects your other interactions (and in particular, affects whether others want to enter long-term interactions with you); fine, but what percentage of your one-off interactions affect your long-term reputation? And how do you know whether the interaction will affect your counterpart’s reputation? (That’s a necessary part of the strategic argument for cooperation, too.)
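The “players don’t know when the interaction will end” move is usually formalized with a continuation probability. A minimal sketch, using standard prisoner’s dilemma payoffs of my own choosing (T = temptation, R = reward, P = punishment, S = sucker), shows how narrow the window for sustained cooperation can be:

```python
# Sketch (standard textbook formalization, not Kristor's): after each round
# the game continues with probability delta. Grim trigger (cooperate until
# the first defection, then defect forever) sustains cooperation only when
# the one-shot gain from defecting is outweighed by the lost future rounds.

T, R, P, S = 5, 3, 1, 0  # assumed payoffs with the usual PD ordering T > R > P > S

def cooperation_sustainable(delta):
    # Expected value of cooperating forever: R / (1 - delta)
    # Expected value of defecting now, then mutual punishment forever:
    #   T + delta * P / (1 - delta)
    return R / (1 - delta) >= T + delta * P / (1 - delta)

# Solving the inequality algebraically gives the threshold continuation
# probability below which cooperation unravels.
threshold = (T - R) / (T - P)
print(threshold)                      # 0.5 with these payoffs
print(cooperation_sustainable(0.4))   # False: the game is too likely to end soon
print(cooperation_sustainable(0.6))   # True
```

The point being: cooperation is an equilibrium only when both players believe, and believe the other believes, that the interaction is likely enough to continue — exactly the kind of shared estimate that ordinary one-off interactions rarely supply.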
Game-theoretic arguments for moral behavior are, I believe, generally correct, but such arguments are surprisingly fragile. They hinge on whether other people will consistently find them convincing and, worse still, whether they will be able to consistently tell whether other people have found them convincing. (If I own railroad tracks that run by your property, am I “defecting” when I have a train drive by you, or are you “defecting” when you threaten to try to prevent me? What if the whistle from the train wakes you up in the middle of the night? What if the soot from the locomotive is making your children sick? What if the sparks from the tracks started a fire on your property?)
One source of fragility is the players’ total indifference to each other’s welfare. The belief that you want what’s good for other people prima facie, before entering into any considerations about whether it’s also good for you, is a powerful lubricant for social friction. For one thing, it means that a final-round defection in a short-term interaction (from which both players can iterate backwards and decide to defect in the first round) is no longer a big threat. For another, it means that the tail risk driving lots of social conflict (the risk that your counterpart will screw you over big-time for a trivial gain, simply because he has no reason not to) can be bracketed.
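One way to formalize this (my formalization, not Kristor’s): let each player’s utility be his own payoff plus some weight alpha times his counterpart’s payoff. Even a modest weight dissolves the dilemma’s structure:

```python
# A minimal sketch, assuming standard PD payoffs: how much does player 1
# need to care about player 2's welfare before defection stops being the
# dominant strategy?

T, R, P, S = 5, 3, 1, 0  # assumed payoffs: temptation, reward, punishment, sucker

def utilities(alpha):
    # Player 1's utility for each (own move, other's move) pair,
    # counting the other player's payoff at weight alpha.
    return {
        ("C", "C"): R + alpha * R,
        ("C", "D"): S + alpha * T,
        ("D", "C"): T + alpha * S,
        ("D", "D"): P + alpha * P,
    }

def defect_is_dominant(alpha):
    u = utilities(alpha)
    return u[("D", "C")] > u[("C", "C")] and u[("D", "D")] > u[("C", "D")]

def mutual_cooperation_is_nash(alpha):
    u = utilities(alpha)
    return u[("C", "C")] >= u[("D", "C")]

print(defect_is_dominant(0.0))           # True: the classic dilemma
print(defect_is_dominant(0.5))           # False: defection no longer dominant
print(mutual_cooperation_is_nash(0.7))   # True: enough care makes (C, C) an equilibrium
```

With these particular payoffs, defection stops being dominant once alpha exceeds 1/4, and mutual cooperation becomes a Nash equilibrium once alpha reaches 2/3 — caring about the other player literally changes which game is being played.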
There are some (narcissistic but not malevolent) psych disorders where people intentionally screw up, hurt themselves, hurt others, just to see the reassuring evidence that their friends and family still value them when they’re not perfect. This is pathological, but the exception probes the rule: everyone needs love and respect, most people can’t navigate through coordination problems without it, and this need is so powerful that a tiny minority have a self-destructive drive to defect to feed it.
Paradoxically, this means that the type of community where the strategic equilibrium of (cooperate, cooperate) is stable will itself be quite fragile, because a community whose members care about each other so much that they will continue to cooperate in the face of signs of defection is ripe for invasion/exploitation by sociopaths, vampires, and callous outsiders. Conversely, the type of community where strangers typically view one another as cattle is likely to remain that way, but any of its ongoing cooperative interactions (particularly interactions between ordinary, not-so-smart people) are quite fragile and likely to fall apart.
The bottom line is you’re unlikely to get a (cooperate-cooperate) equilibrium in any community unless — pace Kristor — you would consider it perfectly sensible to continue to cooperate with a defector, under some circumstances, simply because you care about the defector for his own sake even if he is currently screwing everything up. This is the love of the pelican for her chick; this is the love of the father of the parable for his prodigal son. Perhaps we can also compare it to God’s eternal love for his fallen creation.
III. Finally, a minor point about the use of game-theoretic imagery as metaphor. Note that Kristor jumps immediately from the “Hobbesian war of all against all” to “the defect ↔ defect game phase” without specifying which game we are talking about. Presumably he means the Prisoner’s Dilemma. Probably he assumes everyone in his audience has seen PD juxtaposed with Hobbes’ state of nature before, and will draw the connection; or maybe he assumes none of them have ever heard of any formal game besides PD.
Any 2×2 game can be described in terms of cooperation and defection so long as (cooperate, cooperate) is better than (defect, defect) for both parties. The actual pay-off values of the matrix determine whether the game is a Prisoner’s Dilemma or some other game. If, for example, player 1’s pay-off for (defect, cooperate) is lower than (cooperate, cooperate) but higher than (cooperate, defect), he is in a Stag Hunt. Strategies that are dominant in one game may not be in another; for example, defect is the dominant strategy in PD, but both (cooperate, cooperate) and (defect, defect) are Nash equilibria in SH, so neither strategy is dominant.
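The orderings can be made mechanical. A sketch using the conventional labels for a symmetric 2×2 game (R = both cooperate, P = both defect, T = you defect against a cooperator, S = you cooperate against a defector — the labels and sample payoffs here are mine, not Kristor’s):

```python
# Classify a symmetric 2x2 cooperate/defect game by the ordering of its
# payoffs. Only the ordering matters, not the absolute values.

def classify(T, R, P, S):
    if T > R > P > S:
        return "Prisoner's Dilemma"  # defect strictly dominates
    if R > T and P > S:
        return "Stag Hunt"           # (C, C) and (D, D) are both Nash equilibria
    if T > R > S > P:
        return "Chicken"             # each wants to defect only if the other cooperates
    return "other"

print(classify(T=5, R=3, P=1, S=0))  # Prisoner's Dilemma
print(classify(T=3, R=5, P=1, S=0))  # Stag Hunt: defecting on a cooperator
                                     # pays less than mutual cooperation
```

Swapping two numbers in the matrix turns one game into a completely different one — which is why “cooperate/defect” language alone doesn’t tell you what strategic situation you’re in.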
Or I should say: defect is the dominant strategy in a single-iteration PD. This is really my point. Even if you do specify which game two players are playing, knowing the pay-off matrix only gives you a superficial understanding of the strategy of the game if you do not specify the parameters of the game in question. How many rounds the game has, how much information the players have about their counterparts, how the counterpart is chosen, and how the results are aggregated all strongly affect which strategies “work” and which don’t. Maybe this is too much info to present to a popular audience… but if your point doesn’t even depend on which game you’re talking about, why use the game-theoretic metaphor?
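A toy example of how much the parameters matter (my construction, not from the post): a round-robin tournament between two tit-for-tat players and two always-defectors, where changing nothing but the number of rounds flips which strategy wins.

```python
# Payoffs for (my move, opponent's move) -- standard PD values, my choice.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(s1, s2, rounds):
    moves1, moves2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1 = s1(moves2)  # each strategy sees only the opponent's past moves
        m2 = s2(moves1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1; score2 += p2
        moves1.append(m1); moves2.append(m2)
    return score1, score2

def tournament(rounds):
    # Two tit-for-tat players, two always-defectors, everyone plays everyone.
    players = [tit_for_tat, tit_for_tat, always_defect, always_defect]
    totals = [0, 0, 0, 0]
    for i in range(4):
        for j in range(i + 1, 4):
            si, sj = play(players[i], players[j], rounds)
            totals[i] += si; totals[j] += sj
    return totals[0] + totals[1], totals[2] + totals[3]  # (TFT total, AD total)

print(tournament(1))   # (6, 22): always-defect wins the one-shot tournament
print(tournament(10))  # (96, 76): tit-for-tat wins the iterated one
```

Same matrix, same strategies, same population — only the round count changed, and the “dominant” strategy reversed.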
In Kristor’s case I don’t think this really undermines his point (although admittedly, I’m uncertain how literally I should try to draw out the game-theoretic implications of some of his claims). That’s precisely why I’m bringing it up now: Kristor is clearly comfortable with game theory and is not in any way using the language to mislead or obscure. If someone actually did use clichés and metaphors from game theory in an unintelligible or counterproductive way, I would be perplexed about how to approach the problem without insulting him; so it’s best to flag the issue in this very benign case, to head it off elsewhere.
An example of the end-result of “game theory as metaphor” is the fuzzy use of “Schelling point” in the React-o-Sphere. A Schelling point is a feature of coordination games where:
1. …any strategy can potentially lead to a high (or maximal) payoff, for at least one of one’s partner’s possible strategies.
2. …any strategy can also lead to zero (or very low) payoff, for nearly all (typically: all but one) of one’s partner’s possible strategies.
3. …the zero-payoff outcomes are symmetrical for both partners (if you lose I lose, and vice-versa).
4. …the high-payoff outcomes are either symmetrical, or at least nearly equivalent, relative to the zero payoff.
5. …either the player will get the same payoff in any of the high-payoff scenarios, or at least there is no unique highest payoff (many outcomes give the same maximum payoff).
6. …the players cannot communicate.
7. …the strategies have features which do not improve the payoff (possibly features which are wholly irrelevant to the formal description of the game) which make some strategies more salient than others.
The last point is the key. Schelling describes experiments where he shows soldiers a map and says “If you parachuted down onto a random location on this map, where would you go to rendezvous with a second parachutist, assuming you couldn’t choose a meeting-point in advance, don’t know where the other man landed, and have no way to communicate with him?” If the map has three hills and a bridge, the soldier decides to go wait for the other man on the bridge. If the map has three bridges and a hill, the soldier decides to go to the top of the hill. Whatever feature on the map is uniquely salient — whether due to its centrality, its symmetry, or simply because there’s only one of it — is the Schelling point. Both parachutists will expect the other to head there first, because it’s the only point on the map which doesn’t have corresponding points with an absolutely equal claim: even if they both thought the other would go to one of the three bridges, there is still only a 1/3 chance they pick the same bridge.
(If one of the joint-payoffs in a coordination game is best for both players, it’s no surprise when they independently choose the two strategies which lead to that payoff. Thus condition #5: no Schelling point unless there’s no best payoff. But this doesn’t mean that the payoffs all need to be identical; the reason for the complex qualification to #5 is that the most salient feature of a certain strategy might be that its pay-off is lower than all of the others! If the possible coordination payoffs in a 5×5 matrix are (100, 100), (100, 100), (100, 100), (99, 99), (100, 100), you can bet that the players will spontaneously converge on the worst payoff.)
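The back-of-envelope arithmetic behind that claim (the counts here are mine, echoing the paragraph above): four interchangeable strategies paying 100 on a match, one salient odd-one-out paying 99, and 0 on any mismatch.

```python
# Why the "worst" payoff wins a pure coordination game: converging on the
# unique salient strategy guarantees a match, while randomizing among the
# interchangeable ones only matches by luck.

identical_payoff, salient_payoff, n_identical = 100, 99, 4  # assumed values

# Both players head for the odd one out: guaranteed match.
converge_value = salient_payoff

# Both players pick uniformly among the four interchangeable strategies:
# they match with probability 1 / n_identical.
randomize_value = identical_payoff / n_identical

print(converge_value, randomize_value)  # 99 vs 25.0
```

A certain 99 beats an expected 25, so the strategy whose only distinction is being slightly worse becomes the rational rendezvous.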
People on the Right say “Schelling point” a lot. I think most of the time they mean “equilibrium”, or “Nash equilibrium”. Sometimes they mean “dominant strategy”. Frequently there is some kind of connotation of the emergence of spontaneous order over time. It’s often quite unclear. Usually it doesn’t completely obscure the argument. But it certainly doesn’t clarify it; and over time it creates uncertainty about what the heck other people mean by “Schelling point”, and how they will understand your use of it.
If we couldn’t communicate with one another about how to use “Schelling point”, then the “Schelling point” Schelling point would be to use “Schelling point” to refer to Schelling points. This strategy has the unique virtue of being the original definition of the term, and also has the strongest relationship to the work of Thomas Schelling. Since we can communicate about this choice there is (sadly) no “Schelling point” Schelling point; rather than coordinating our language based on salience we can discuss the pros and cons, and gradually modify our usage over time. So let this be my contribution to the discussion: use game-theoretic terms when you have a model in mind and need terms to illustrate it, but when you need a metaphor go elsewhere.
(Update, 2017/06/27: I’ve replied to Kristor’s reaction here.)