Defection

The beginning of Kristor’s recent post on categorical imperatives is excellent. However, as he begins to drift into the language of game theory to explain the importance of honoring God and kin, he makes a few points I would like to take issue with.

(If you did not follow the link: note that Kristor prefers “profect” to “cooperate”.)

I. This seems inaccurate:

Defection is all that the defective mind can see. So the Marxian mind sees all profective moves as veiled defections. Only defective minds – minds themselves immured in the strategy of defection – could find this account of human life convincing.

Highly coop-coop environments have second-order dynamics that channel possible defections or partial defections into hypocrisy, pharisaism, and so on. In a degenerating society, cooperators will tend to assume that anyone who wants to defect has already defected, because the costs of defection (and in particular the costs of open defection) are low anyway. But in a highly coop-coop society which has a known level of (unidentified) defections, cooperative people will start to become aware that some of these defections must be the work of people who appear to be cooperative; i.e., they are following the strategy of defecting wherever it is safe to do so. In fact, some of these veiled-defectors may be cooperating specifically in order to lull their targets into a false sense of security!

This relationship between collective cooperation and hypocrisy appears at the social level, but it is even more striking when looking at institutions within society. Obscure cults or associations can rely exclusively on the tiny minority of the population who share an obsessive dedication to the group’s ideals. So long as it still has a reservoir of outliers to draw on and its own irrelevance shields it from the notice of amoral sociopaths, a growing institution can count on high levels of internal cooperation.

But as an institution grows, it attracts more attention and eventually outstrips the supply of idealists. It will soon have ordinary members who are “in it for the paycheck” but try to blend in with the institution’s culture and its pieties. Eventually, it will attract sociopaths who blend in with its culture precisely to deceive the remaining idealists (and the clock-punchers who look to idealists for guidance). If a sociopath succeeds he can maximize his share of the power and influence, since the institution will try to entrust the power it is amassing to those most likely to cooperate.

So even if we refrain from making any particular assumptions about the psychology and morality of Kristor’s cooperators, we can see how they could fall into skepticism and cynicism about the actual motives of those who surround them. They don’t know who the hidden defector is — it could be anyone! From there it is only a short step to “it could be everyone,” if one remembers that many people following the “defect wherever it is safe to do so” strategy may never find a chance to defect safely and secretly.

II. But we can go further. Note that Kristor describes responding to a defective environment with cooperation as “simply stupid”.

In defection, it is simply stupid to proceed other than by defecting at every turn.

This is the exact rational egoism the Bolsheviks are attributing to law-abiding bourgeois! “I’ll cooperate with you so long as cooperation helps me, and I’ll slaughter you when it starts to look like things are going downhill.”

I had a debate with Jim a few weeks ago on a similar topic. Jim can’t quite accept that ordinary, reasonable people want to see signs that you see them as “on your team”, as an amicus rather than a hostis. Game theory is hard even in a lecture hall. It’s much harder when you extend the models to include the sorts of wrinkles which affect real-life situations; and hardest of all is figuring out what kind of situation you’re in, from a game-theoretic perspective, as it is happening to you. (And remember: in most games it matters whether you expect your opponent to play intelligently or not!)

If game theory is not always easy, and if for people of average intelligence it’s never easy, then the strategic arguments in favor of the (cooperate, cooperate) equilibrium become massively less convincing when the equilibrium you are discussing is very fragile.

Infinitely iterated prisoner’s dilemmas do have a (cooperate, cooperate) equilibrium, under certain reasonable assumptions, but how many infinitely-iterated social interactions have you been in? Even life-long interactions are slender threads in the overall fabric of social life. You can finesse this shortage of infinitely-iterated interactions by saying instead that the players don’t know when the interaction will end… but realistically, how often are you in interactions where you really have no idea whether or not this will be the last interaction? You might reply that while certain interactions are predictably short-lived, the reputation you gain in them affects your other interactions (and in particular, affects whether others want to enter long-term interactions with you); fine, but what percentage of your one-off interactions affect your long-term reputation? And how do you know whether the interaction will affect your counterpart’s reputation? (That’s a necessary part of the strategic argument for cooperation, too.)
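
To see how much work those “reasonable assumptions” do, consider a minimal sketch of the standard grim-trigger calculation (in Python; the payoff numbers are my own illustrative choices, not anything in Kristor’s post). Cooperation is an equilibrium only if the probability that the interaction continues is high enough relative to the one-shot gain from defecting:

```python
# A minimal sketch of the grim-trigger condition in an iterated
# Prisoner's Dilemma. The payoffs are illustrative assumptions:
# T (temptation) > R (reward) > P (punishment) > S (sucker's payoff).

T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(delta: float) -> bool:
    """Grim trigger sustains cooperation iff the one-shot gain from
    defecting (T - R) is outweighed by the expected future loss of
    cooperation, delta * (R - P) / (1 - delta)."""
    return T - R <= delta * (R - P) / (1 - delta)

# Equivalent algebraic threshold: delta >= (T - R) / (T - P)
print((T - R) / (T - P))             # 0.5 for these payoffs
print(cooperation_sustainable(0.4))  # False: the game ends too soon
print(cooperation_sustainable(0.9))  # True: an "infinite enough" horizon
```

Let the players suspect the interaction is ending (that is, let delta fall below the threshold) and the equilibrium evaporates; that is the fragility at issue here.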

Game-theoretic arguments for moral behavior are, I believe, generally correct, but such arguments are surprisingly fragile. They hinge on whether other people will consistently find them convincing and, worse still, whether they will be able to consistently tell whether other people have found them convincing. (If I own railroad tracks that run by your property, am I “defecting” when I have a train drive by you, or are you “defecting” when you threaten to try to prevent me? What if the whistle from the train wakes you up in the middle of the night? What if the soot from the locomotive is making your children sick? What if the sparks from the tracks started a fire on your property?)

One source of fragility is the players’ total indifference to each other’s welfare. The belief that you want what’s good for other people prima facie, before entering into any considerations about whether it’s also good for you, is a powerful lubricant for social friction. For one thing, it means that a final-round defection in a short-term interaction (from which both players can iterate backwards and decide to defect in the first round) is no longer a big threat. For another, it means that the tail-risk driving lots of social conflict (the risk that your counterpart will screw you over big-time for a trivial gain, simply because he has no reason not to) can be bracketed.
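
The backward induction in that parenthesis can be made explicit. In this sketch (payoffs are again my own assumptions), each round is solved given the already-fixed value of the future; since the continuation value shifts every cell of the matrix equally, the last round’s defection propagates all the way back to the first:

```python
# Backward induction in a repeated PD with a known last round: solve
# each round given the (already-determined) value of the future.

PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def solve(rounds: int):
    """Work backwards from the final round. Adding the continuation
    value to every cell never changes the stage-game equilibrium."""
    cont = (0, 0)  # value of the game after the last round
    plan = []
    for _ in range(rounds):
        game = {m: (u1 + cont[0], u2 + cont[1])
                for m, (u1, u2) in PD.items()}
        # the stage game's pure Nash equilibrium, found by brute force
        eq = [(m1, m2) for m1 in 'CD' for m2 in 'CD'
              if all(game[(m1, m2)][0] >= game[(a, m2)][0] for a in 'CD')
              and all(game[(m1, m2)][1] >= game[(m1, b)][1] for b in 'CD')]
        plan.append(eq[0])
        cont = game[eq[0]]
    return plan[::-1]

print(solve(3))  # [('D', 'D'), ('D', 'D'), ('D', 'D')]: full unraveling
```

Prima facie goodwill breaks exactly this unraveling: a player who values his counterpart’s payoff as such is no longer playing the matrix that makes mutual defection the unique stage equilibrium.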

There are some (narcissistic but not malevolent) psych disorders where people intentionally screw up, hurt themselves, hurt others, just to see the reassuring evidence that their friends and family still value them when they’re not perfect. This is pathological, but the exception probes the rule: everyone needs love and respect, most people can’t navigate through coordination problems without it, and this need is so powerful that a tiny minority have a self-destructive drive to defect to feed it.

Paradoxically, this means that the type of community where the strategic equilibrium of (cooperate, cooperate) is stable will itself be quite fragile, because a community whose members care about each other so much that they will continue to cooperate in the face of signs of defection is ripe for invasion/exploitation by sociopaths, vampires, and callous outsiders. Conversely, the type of community where strangers typically view one another as cattle is likely to remain that way, but any of its ongoing cooperative interactions (particularly interactions between ordinary, not-so-smart people) are quite fragile and likely to fall apart.

The bottom line is you’re unlikely to get a (cooperate-cooperate) equilibrium in any community unless — pace Kristor — you would consider it perfectly sensible to continue to cooperate with a defector, under some circumstances, simply because you care about the defector for his own sake even if he is currently screwing everything up. This is the love of the pelican for her chick; this is the love of the father of the parable for his prodigal son. Perhaps we can also compare it to God’s eternal love for his fallen creation.

III. Finally, a minor point about the use of game-theoretic imagery as metaphor. Note that Kristor jumps immediately from the “Hobbesian war of all against all” to “the defect ↔ defect game phase” without specifying which game we are talking about. Presumably he means the Prisoner’s Dilemma. Probably he assumes everyone in his audience has seen PD juxtaposed with Hobbes’ state of nature before, and will draw the connection; or maybe he assumes none of them have ever heard of any formal game besides PD.

Any 2×2 game can be described in terms of cooperation and defection so long as (cooperate, cooperate) is better than (defect, defect) for both parties. The actual pay-off values of the matrix determine whether the game is a Prisoner’s Dilemma or some other game. If, for example, player 1’s pay-off for (defect, cooperate) is lower than for (cooperate, cooperate) but higher than for (cooperate, defect), he is in a Stag Hunt. Strategies that are dominant in one game may not be in another; for example, defect is the dominant strategy in PD, but both (cooperate, cooperate) and (defect, defect) are Nash equilibria in SH, so neither strategy is dominant.
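
A brute-force check makes the contrast concrete. This sketch (the payoff numbers are my own, chosen only to satisfy the orderings just described) enumerates the pure-strategy Nash equilibria of each game:

```python
# Hypothetical payoff tables satisfying the orderings above; entries
# are (player 1, player 2) payoffs.
PD = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
      ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
STAG_HUNT = {('C', 'C'): (4, 4), ('C', 'D'): (0, 3),
             ('D', 'C'): (3, 0), ('D', 'D'): (2, 2)}

def pure_nash(game):
    """Profiles where neither player gains by unilaterally switching."""
    moves = ('C', 'D')
    return [(m1, m2) for m1 in moves for m2 in moves
            if all(game[(m1, m2)][0] >= game[(a, m2)][0] for a in moves)
            and all(game[(m1, m2)][1] >= game[(m1, b)][1] for b in moves)]

print(pure_nash(PD))         # [('D', 'D')]: defect dominates
print(pure_nash(STAG_HUNT))  # [('C', 'C'), ('D', 'D')]: two equilibria
```

In the Stag Hunt the matrix alone cannot tell you which equilibrium the players will land on; that depends on exactly the extra-matrix parameters discussed next.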

Or I should say: defect is the dominant strategy in a single-iteration PD. This is really my point. Even if you do specify which game two players are playing, knowing the pay-off matrix only gives you a superficial understanding of the strategy of the game if you do not specify the parameters of the game in question. How many rounds the game has, how much information the players have about their counterparts, how the counterpart is chosen, and how the results are aggregated all strongly affect which strategies “work” and which don’t. Maybe this is too much info to present to a popular audience… but if your point doesn’t even depend on which game you’re talking about, why use the game-theoretic metaphor?
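
To illustrate with the simplest of those parameters, the horizon, here is a toy head-to-head between tit-for-tat and always-defect (payoffs are the same illustrative assumptions as above):

```python
# A toy iterated-PD match showing that "which strategy works" depends
# on parameters the bare payoff matrix does not determine.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def play(p1, p2, rounds):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        r1, r2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2); s1 += r1; s2 += r2
    return s1, s2

print(play(tit_for_tat, always_defect, 1))    # (0, 5)
print(play(tit_for_tat, always_defect, 200))  # (199, 204)
```

Note that tit-for-tat never beats always-defect head-to-head; its famous tournament victories depend entirely on how the results are aggregated across a population of strategies, which is precisely one of the parameters listed above.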

In Kristor’s case I don’t think this really undermines his point (although admittedly, I’m uncertain how literally I should try to draw out the game-theoretic implications of some of his claims). That’s precisely why I’m bringing it up now: Kristor is clearly comfortable with game theory and is not in any way using the language to mislead or obscure. If someone actually did use clichés and metaphors from game theory in an unintelligible or counterproductive way, I would be perplexed about how to approach the problem without insulting him; so it’s best to flag the issue in this very benign case, to head it off elsewhere.

An example of the end-result of “game theory as metaphor” is the fuzzy use of “Schelling point” in the React-o-Sphere. A Schelling point is a feature of coordination games where:

  1. …any strategy can potentially lead to a high (or maximal) payoff, for at least one of one’s partner’s possible strategies.
  2. …any strategy can also lead to zero (or very low) payoff, for nearly all (typically: all but one) of one’s partner’s possible strategies.
  3. …the zero-payoff outcomes are symmetrical for both partners (if you lose I lose, and vice-versa).
  4. …the high-payoff outcomes are either symmetrical, or at least nearly equivalent, relative to the zero payoff.
  5. …either the player will get the same payoff in any of the high payoff scenarios, or at least there is no unique highest payoff (many outcomes give the same maximum payoff).
  6. …the players cannot communicate.
  7. …the strategies have features which do not improve the payoff (possibly features which are wholly irrelevant to the formal description of the game) and which make some strategies more salient than others.

The last point is the key. Schelling describes experiments where he shows soldiers a map and says “If you parachuted down onto a random location on this map, where would you go to rendezvous with a second parachutist, assuming you couldn’t choose a meeting-point in advance, don’t know where the other man landed, and have no way to communicate with him?” If the map has three hills and a bridge, the soldier decides to go wait for the other man on the bridge. If the map has three bridges and a hill, the soldier decides to go to the top of the hill. Whatever feature on the map is uniquely salient — whether due to its centrality, its symmetry, or simply because there’s only one of it — is the Schelling point. Both parachutists will expect the other to head there first, because it’s the only point on the map which doesn’t have corresponding points with an absolutely equal claim: even if they both thought the other would go to one of the three bridges, there is still only a 1/3 chance they pick the same bridge.

(If one of the joint-payoffs in a coordination game is best for both players, it’s no surprise when they independently choose the two strategies which lead to that payoff. Thus condition #5: no Schelling point unless there’s no best payoff. But this doesn’t mean that the payoffs all need to be identical; the reason for the complex qualification to #5 is that the most salient feature of a certain strategy might be that its pay-off is lower than all of the others! If the possible payoffs on a 5×5 matrix are (100, 100), (100, 100), (100, 100), (99, 99), (100, 100), you can bet that the players will spontaneously converge on the worst payoff.)
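
The example can even be made mechanical. Here is a sketch (my own illustration, not Schelling’s) of a salience rule both players might independently apply: pick the outcome whose payoff is unlike all the others:

```python
# The 5x5 coordination game above: four outcomes pay (100, 100) and one
# pays (99, 99). With no communication, the only symmetry-breaking
# feature is uniqueness, so a salience-driven chooser picks the odd one.

from collections import Counter

diagonal = [100, 100, 100, 99, 100]  # joint payoffs of the five outcomes

def salient_choice(payoffs):
    """Return the index of the option whose payoff occurs exactly once,
    or None if no option is uniquely salient."""
    counts = Counter(payoffs)
    unique = [i for i, p in enumerate(payoffs) if counts[p] == 1]
    return unique[0] if len(unique) == 1 else None

choice = salient_choice(diagonal)  # both players run the same reasoning
print(choice, diagonal[choice])    # 3 99: they coordinate on the worst cell
```

Nothing in the payoffs rewards choosing the 99 cell; its salience is simply what lets two non-communicating players converge at all.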

People on the Right say “Schelling point” a lot. I think most of the time they mean “equilibrium”, or “Nash equilibrium”. Sometimes they mean “dominant strategy”. Frequently there is some kind of connotation of the emergence of spontaneous order over time. It’s often quite unclear. Usually it doesn’t completely obscure the argument. But it certainly doesn’t clarify it; and over time it creates uncertainty about what the heck other people mean by “Schelling point”, and how they will understand your use of it.

If we couldn’t communicate with one another about how to use “Schelling point”, then the “Schelling point” Schelling point would be to use “Schelling point” to refer to Schelling points. This strategy has the unique virtue of being the original definition of the term, and also has the strongest relationship to the work of Thomas Schelling. Since we can communicate about this choice there is (sadly) no “Schelling point” Schelling point; rather than coordinating our language based on salience we can discuss the pros and cons, and gradually modify our usage over time. So let this be my contribution to the discussion: use game theoretic terms when you have a model in mind and need terms to illustrate it, but when you need a metaphor go elsewhere.

(Update, 2017/06/27: I’ve replied to Kristor’s reaction here.) 


14 thoughts on “Defection”

  1. “This relationship between collective cooperation and hypocrisy appears at the social level, but it is even more striking when looking at institutions within society. Obscure cults or associations can rely exclusively on the tiny minority of the population who share an obsessive dedication to the group’s ideals. So long as it still has a reservoir of outliers to draw on and its own irrelevance shields it from the notice of amoral sociopaths, a growing institution can count on high levels of internal cooperation.

    But as an institution grows, it attracts more attention and eventually outstrips the supply of idealists. It will soon have ordinary members who are “in it for the paycheck” but try to blend in with the institution’s culture and its pieties. Eventually, it will attract sociopaths who blend in with its culture precisely to deceive the remaining idealists (and the clock-punchers who look to idealists for guidance). If a sociopath succeeds he can maximize his share of the power and influence, since the institution will try to entrust the power it is amassing to those most likely to cooperate.”

    The following article, which is part of a series, presents a very different model to your suggestion:

    https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/


    1. Good link. I’m pretty sure I’ve read that before, but it’s sort of coming back to me slowly. This, I think, is a much more important hypothesis about institutional structure:

      https://meaningness.com/geeks-mops-sociopaths

      (which I believe Rao or whoever references in that Ribbonfarm link). I think I only ever read the first essay in that series. How significant are the later installments?

      IIRC, Rao is trying to model choosing “cooperate” in an institutional/corporate setting as overconfidence and/or gullibility wrt the willingness and ability of the corporation to reward the cooperator. Firms need cooperators as mules to pull the firm along (he has an almost Marxian conception of the exploitation of the workers relative to the “rationally correct” value of their work) but they need people who are good at strategy and rational optimization in positions of responsibility, because those are the people who will respond to incentives wrt the success or failure of their own unit, and so amoral rationalist managers groom amoral rationalist slackers for future management roles. Something like that, right?

      I think this is an interesting observation but (a) the exception rather than the rule in the corporate world and (b) probably a second-order phenomenon that is parasitic on the construction of a successful institution which is animated by a genuine ethos (even if that ethos is simply “Make money for the shareholders, let’s all get rich together”), as it is taken over by sociopaths and increasingly becomes a nakedly sociopathic machine.

      (Btw my posts on False Consciousness, on Institutions (“Machiavellian Strategic Fundamentals”) and on Winning Coalitions say more on these topics if you *really needed* to know exactly my view.)


  2. Thanks for this searching and careful critique. Some responses that I hope shall clear up the difficulties without in any way vitiating the value of your essay:

    I.

    This seems inaccurate:

    Defection is all that the defective mind can see. So the Marxian mind sees all profective moves as veiled defections. Only defective minds – minds themselves immured in the strategy of defection – could find this account of human life convincing.

    Highly coop-coop environments have second-order dynamics that channel possible defections or partial defections into hypocrisy, pharisaism, and so on. In a degenerating society, cooperators will tend to assume that anyone who wants to defect has already defected, because the costs of defection (and in particular the costs of open defection) are low anyway. But in a highly coop-coop society which has a known level of (unidentified) defections, cooperative people will start to become aware that some of these defections must be the work of people who appear to be cooperative;

    I should have been more careful to make clear that both minds and societies are located somewhere on a spectrum from perfect defection to perfect profection. Minds at or near complete defection are prone to apprehend all acts of others as defections more or less veiled. They are, i.e., paranoid. And one way to get such a mind is to defect at its expense, repeatedly. There is a training effect.

    But normal or noble minds, that are nearer to perfect profection on the spectrum, can see both profection and defection. They are in a position to pity the paranoid defector, even as they take steps to avoid suffering his defections. Good people and societies can be suspicious, cautious. And so they ought to be: Trust but verify, as a notably canny profector once said; gentle as doves, sly as serpents.

    II.

    In defection, it is simply stupid to proceed other than by defecting at every turn.

    This is the exact rational egoism the Bolsheviks are attributing to law-abiding bourgeois! “I’ll cooperate with you so long as cooperation helps me, and I’ll slaughter you when it starts to look like things are going downhill.”

    Exactly. The Bolshevik analysis of society is correct, so long as we (wrongly) take society to be essentially defective. Under Marxism, society is *nothing but* defective power relations. Marxists see profection as veiled defection. Marxism offers them no other way to see it. It’s self-sealing, like Hell: locked on the inside.

    Nevertheless even for normal and noble minds, as an endlessly iterated game approaches equilibrium at pervasive defection, and the Marxian analysis becomes correct in the same way that a broken clock is correct twice a day, it becomes suicidally foolish to expect anything other than defection at every turn, and suicidal to profect on the basis of such an expectation.

    Game theory is hard even in a lecture hall. It’s much harder when you extend the models to include the sorts of wrinkles which affect real-life situations; and hardest of all is figuring out what kind of situation you’re in, from a game-theoretic perspective, as it is happening to you.

    True. Same goes for physics. Yet somehow we manage to figure out the trajectories more or less properly, most of the time, so as not to get killed.

    Infinitely iterated prisoner’s dilemmas do have a (cooperate, cooperate) equilibrium, under certain reasonable assumptions, but how many infinitely-iterated social interactions have you been in?

    Yes. This is why cities are more hazardous – morally, ergo corporeally – than villages, families, and other little platoons. It is why the little platoons are so critical to any larger scale social order. If you are in my little platoon and I see you defecting at the expense of a counterparty from another, then I will treat you with some suspicion henceforth: your reputation will suffer, even among those who are inclined to trust you the most, and whom you are therefore most prudent in trusting. And as your reputation goes, so go your fortunes.

    One source of fragility is the players’ total indifference to each other’s welfare. The belief that you want what’s good for other people prima facie, before entering into any considerations about whether it’s also good for you, is a powerful lubricant for social friction.

    … The bottom line is you’re unlikely to get a (cooperate-cooperate) equilibrium in any community unless – pace Kristor – you would consider it perfectly sensible to continue to cooperate with a defector, under some circumstances, simply because you care about the defector for his own sake even if he is currently screwing everything up. This is the love of the pelican for her chick; this is the love of the father of the parable for his prodigal son.

    Yes! So, familiar relations are crucial to the cohesion of profective society – a point I emphasized in the post:

    Pervasive defection is vulnerable to supersession by some greater degree of cooperation only in virtue of familiar relations, that are by Nature high trust and altruistic: profect ↔ profect. Families can be the seeds of a pervasive profection; are its foundation and last redoubt. Families have a shot at resisting invasion or perversion by an environment of defection, and at fostering profective communities that can grow.

    Thus the crucial importance of honoring parents. Filial piety entails honoring siblings, and their spouses and children. It entails honoring all your kin. On that basis, only, can a generally profective society be built. Profection is an artifact of kinship. This then also demonstrates the crucial importance of the nation; which is to say, of a set of people bound to each other by blood; by shared genetic heritage. Within such a nation, profection can be superdurable, and such nations can be difficult to conquer.

    This is why mediaeval knights took into their households the sons of their brothers in arms, and married their daughters to their friends, their lords, and their vassals.

    III.

    Kristor jumps immediately from the “Hobbesian war of all against all” to “the defect ↔ defect game phase” without specifying which game we are talking about. Presumably he means the Prisoner’s Dilemma.

    Yes, Prisoner’s Dilemma endlessly iterated with perfect information about the records of counterparties – with, as we might call it, market perfection such as might be achieved in little platoons or by using Big Data. I should have made that more clear in the post.

    In such games strategies that require historical information – Tit for Tat or Tit for Two Tats – can arise. “Historical information” is just a way of saying “reputation.”

    If the game population recombines and reproduces its strategies under selection pressure, historical strategies can wither or prosper depending on their accumulated wealth in historical payoffs vis-à-vis the wealth of competing strategies. They can thus dwindle or spread in an isolated population – a nation. After a sufficiently large number of generations, counterparties superficially identifiable as fellow nationals are relatively safe bets even if the details of their personal histories are obscure.

    The Hobbesian War is the zero of society, properly so called. I don’t know of any instances of that equilibrium. The Ik may have come close. Hobbesian war is vanishingly close to extinction, so it may simply be coterminous with the equilibrium of death.

    … use game theoretic terms when you have a model in mind and need terms to illustrate it, but when you need a metaphor go elsewhere.

    I didn’t intend a metaphor, but a strict gedanken model. Admittedly simplified and unrealistically pure, like all models. And, like all models, a species of metaphor, and edifying qua metaphor only in virtue of some formal similarity between the model and the system to which it refers – which is to say, the verisimilitude of the model.


    1. Thank you for these extensive comments. I have written some scattered replies in a new post; I thought your reply was interesting enough to hoist up out of the comments. (I’m sorry I don’t have the time to weave my thoughts together as coherently as you did yours!)

      >>I should have made that more clear in the post.

      To clarify, your post was exactly as clear as it needed to be. I had been thinking about the “game theory as metaphor” problem for other reasons. Your post, as I said, happened to offer an unusually clear case of the error without any of its usual harmful consequences… which could confuse the point I was trying to make (i.e., make it difficult to distinguish the error from the harm) and make the person in error more sensitive to the criticism than necessary. You seem to have taken my point in the spirit in which it was intended, as I had hoped.

      That said:

      >>I didn’t intend a metaphor, but a strict gedanken model.

      … to the extent that there was no metaphorical intention at all in your post, I can say that I, at least, might have had an easier time following your argument to its conclusion if you had provided more detail about the formal model. This is not necessarily a bad thing, though! What makes something more clear to me might make it more obscure to someone else, and on top of this there are questions of efficiency.

      At any rate, if there is an explicit game-theoretic model underlying your position, I know that I could, when the time is right, elicit more details from you and gradually get a better picture of what you have in mind. But I’m sure that if I wait patiently you will elaborate this model on your blog eventually.


    1. This is a topic on which I may submit something to Jacobite: exit is always conditional on non-exit; there is no way to exit from being. Exit wrt domain X always entails greater dependence on some domain Y, simultaneous exit from X&Y entails greater dependence on some Z, and so on. In effect “entrance into domain X” is usually tantamount to “having X-type options” and, while foregoing certain options can be a valuable move, it is always a move that is dependent on the value of your *remaining options*, after exit. What goes for entrance/exit goes equally for trust/trustlessness. Designing trustless institutions is an amazingly fun intellectual game, but in practice you only have a certain number of degrees of freedom, and when you jettison one type of trust you become more dependent on another.


      1. well, the point being made there is that competitive systems don’t *demand* trust (but certainly they can earn trust, given their performance). what you’re saying is that you can only not trust a company because now you trust the market?


      2. Right. For example, why would you want to trust your company to pay your pension after you retire? Awful! Let’s go for a “trustless” system where your retirement plan is to be supported by [your kids; Social Security; treasury bonds; a portfolio of investments; your church]. Oh wait, now suddenly you need to trust [your family; the voters(!); the Fed(!!); your broker; your neighbors]. Try to wrap any of these obligations in legal contracts, and suddenly you need to trust your lawyer (!!!); band together with other people in a similar situation so you can negotiate and enforce the contracts together, and you’re left trusting union bosses (!!!!?!!).

        Trust doesn’t go away, it just gets reshuffled in different combinations.


      3. my company [my family, the government, banks, brokers, neighbors, lawyers, union bosses, et al] becomes more trustworthy if it exists within a competitive system that ruthlessly eliminates those underperforming. this system doesn’t demand that I trust for it to work, but of course it earns my trust insofar as it works (and so do those that survive within it)


      4. Hmmmmmmmm. It’s easy to model dynamics where markets lead to more trust. And ofc in the simplest forms of those models, the rational-economic-agent who is more trustworthy due to those dynamics is the *only* kind of agent. But it’s also easy to model dynamics where markets lead to less trust. Ruthless elimination of underperformance evolves mimicry just as easily as it evolves signaling!

        (Also, a slight ambiguity: your distinction b/w “demanding trust” and “earning trust”, which I take to be a distinction between reasons-for-trust and incentives-to-behave-trustingly, is a shrewd move, but it then leaves you with little to say about situations where the market “earns your trust” in the sense that it provides you with the right incentives to act in a trusting way while it provides your counterparties with incentives to defect on you.)


      5. 1) well, an underperforming lawyer is, minimally, one which will not deliver the service he’s promised. in which model of the market does he stay afloat for long?

        2) the distinction is between needing trust (beforehand) to work, and delivering trust as a product of its workings.


      6. 1. If the market for lawyers is broken, what are you going to do after you pay your lawyer and he screws you over? Sue him?

        2. I’m sure you’ve done equilibrium analysis before!

