(Discussion of “Defection”)

There were several useful comments on Defection. Vincent notices parallelism between my implicit theory of institutional cooperation and the alternative hypothesis offered by Venkatesh Rao. If you were going to read just one internet essay on institutional group dynamics, though, I would point you to Geeks, Mops, and Sociopaths. (Or if you want a link to an organic, homegrown internet essay on institutions, QL has its very own Machiavellianism series.)

CyborgNomade draws the connection between the strategic problems of social cooperation and an old Nick Land post on trustlessness — particularly timely now that Land is working with the exit-themed Jacobite collective. I want to write something on the concept of exit soon, but once it’s written I will probably offer it to Jacobite first, so you will not see it appear here for a long time. For now you’ll have to be content with my reply to CN’s comment and my review of Davidson & Rees-Mogg.

Kristor, meanwhile, was kind enough to respond at length to my observations, and the purpose of this follow-up post is really to give our conversation a proper home. If you found my post useful, read this reply to it as well.

Kristor makes several points about game theory with which I agree.

  • He notes that people use induction from past experience, such as previous defections, to predict future behavior (not always wisely, or with happy results).
  • He concurs on the importance of describing the parameters of games (i.e., strategic interactions), and emphasizes that community-size is one of the most important parameters. Indeed; and this is one reason why even urbanites (especially urbanites!) look for peer-groups/tribes of 20-1000 people.
  • He accepts my point about theory and practice, and extends it to an analogy between lecture-hall kinematics and lecture-hall game theory, an analogy which I accept — although I would draw a slightly different conclusion, namely that while people have excellent heuristics for navigating physical and social obstacles without explicit thought, they start tripping over their own feet as soon as they start trying to improve the heuristics by overriding them with theorems in order to cover cases which were rare or unimportant in our evolutionary history.
  • “After a sufficiently large number of generations, counterparties superficially identifiable as fellow nationals are relatively safe bets even if the details of their personal histories are obscure.” This is an important point, which should be better understood. This Axelrod paper on ethnocentrism makes the strategic case (and also this paper, which repeats the main point but has better graphics), and Darwinian Reactionary’s classic “No True Scotsman” post makes the epistemological case with a justification of descriptions like “superficially identifiable as (a Scotsman)” (i.e., of ethnic kinds).
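
(If you want the strategy space of those simulations spelled out, here is a minimal Python sketch. The payoff numbers are the usual prisoner’s-dilemma placeholders, chosen by me for illustration rather than taken from either paper, and it encodes single encounters only; the papers’ actual finding, that the ethnocentric strategy comes to dominate, depends on spatial clustering and local reproduction, which this sketch does not attempt to reproduce.)

```python
# A minimal sketch (not the papers' agent-based simulation) of tag-conditional
# strategies in a one-shot Prisoner's Dilemma. Payoff values are illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# A strategy is a pair of moves: (move vs. same tag, move vs. different tag).
STRATEGIES = {
    "humanitarian": ("C", "C"),   # cooperate with everyone
    "ethnocentric": ("C", "D"),   # cooperate in-group, defect out-group
    "traitorous":   ("D", "C"),   # defect in-group, cooperate out-group
    "selfish":      ("D", "D"),   # defect against everyone
}

def move(strategy, my_tag, their_tag):
    """Choose C or D using only the counterpart's visible tag."""
    vs_same, vs_other = STRATEGIES[strategy]
    return vs_same if my_tag == their_tag else vs_other

def payoff(strat_a, tag_a, strat_b, tag_b):
    """Payoff to player A in a single anonymous encounter."""
    return PAYOFF[(move(strat_a, tag_a, tag_b), move(strat_b, tag_b, tag_a))]

# A shared tag plus an ethnocentric norm yields mutual cooperation without
# any personal history of the counterpart:
print(payoff("ethnocentric", "red", "ethnocentric", "red"))   # 3: (C, C)
print(payoff("ethnocentric", "red", "ethnocentric", "blue"))  # 1: (D, D)
print(payoff("humanitarian", "red", "selfish", "blue"))       # 0: exploited
```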

In passing, Kristor mentions that “all models [are] a species of metaphor, and edifying qua metaphor only in virtue of some formal similarity between the model and the system to which it refers – which is to say, verisimilitude of the model.” I would not describe models as metaphors per se. A metaphor lacks literal meaning; a metaphorical figure is precisely one in which the literal meaning of the words is irrelevant to the point I am conveying. A model has a literal meaning, and its details carry precise implications. That the model only approximates that which it models does not make it a metaphor. Both measurements and approximations express a literal claim about a magnitude, but with different degrees of confidence; metaphors are not less-exact alternatives to accurate measurements, but attempts to express something which cannot be phrased in terms of measure.

Despite these quibbles, I take his point to be that the map is not the territory, and I agree entirely. A few months ago I wrote a continuation of Tribalism: A Model, which opens with some reflections on precisely this question of verisimilitude which Kristor raises; pending its appearance on QL, I will only state that the excellence of a model does not lie in reduplicating reality but in reducing it to its essential forms. Once you have modeled the interaction of certain formal principles, the next challenge is to figure out which phenomena in our world, if any, these forms animate, and then to extend the simplest form of the model until its approximations have the desired level of accuracy.

I will bring up one more general point before turning to the substance of Kristor’s comment. In discussing the need for parameters that specify e.g. whether players “remember” past interactions, Kristor says:

In such games[,] strategies that require historical information – Tit for Tat or Tit for Two Tats – can arise. “Historical information” is just a way of saying “reputation.”

Kristor likely knows this already, but the reader should note that even the availability of “historical information” is open to further specification. Does the agent remember only the most recent interaction? His last N interactions? All of his previous interactions with his current counterpart? All of his previous interactions with members of the counterpart’s “tribe” (but without any distinction amongst them)? Every interaction he (or his partner) has ever had? These lead to different strategic equilibria. (See the two links I provided to articles on ethnocentrism for examples.)
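
To make the contrast concrete, here is a small sketch of two of those memory regimes in an iterated prisoner’s dilemma: a memory-one Tit for Tat, which reacts only to the counterpart’s most recent move, against a strategy that conditions on the counterpart’s entire record with it. The strategies and payoff numbers are my own illustrative choices, not anything taken from the linked papers.

```python
# Illustrative sketch: two memory regimes against the same counterpart.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Memory-one: copy the counterpart's most recent move."""
    return opponent_history[-1] if opponent_history else "C"

def grudger(opponent_history):
    """Full personal history: cooperate until the counterpart defects once."""
    return "D" if "D" in opponent_history else "C"

def occasional_defector(opponent_history):
    """Defects every fourth round, cooperates otherwise."""
    return "D" if len(opponent_history) % 4 == 3 else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the game; each strategy sees only the other's past moves."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, occasional_defector))  # retaliates once, then forgives
print(play(grudger, occasional_defector))      # never forgives after one defection
```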

So there is no single version of “historical information”; therefore, I would not try to collapse historical information into reputation. Rather, I would say different reputation-systems are alternatives to historical information. They attempt to provide a different solution to the same problem, while routing around the epistemological and strategic difficulties of knowing everything that might help guide one’s strategy. (If you read the DR/E post I linked to, on “No True Scotsman”, you’ll see that assigning people to ethnic kinds functions in the same way. The stereotype of a Scotsman lets you make successful inferences about a particular Scotsman without any biographical information about him.)
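
(A toy sketch of what I mean, with names and decision rule invented purely for illustration: a shared reputation tally, or failing that the stereotype of the counterpart’s kind, stands in for a personal history one does not have.)

```python
# Toy sketch: shared reputation and kind-level stereotypes as substitutes for
# personal history. Names and the decision rule are invented for illustration.
from collections import defaultdict

individual_rep = defaultdict(list)  # person -> moves that anyone has observed
kind_rep = defaultdict(list)        # kind   -> moves observed from its members

def report(person, kind, move):
    """Any player who interacts with `person` can add to the shared record."""
    individual_rep[person].append(move)
    kind_rep[kind].append(move)

def decide(person, kind):
    """Use the individual's record if there is one; otherwise fall back on the
    stereotype of his kind. Cooperate if the record is mostly cooperative."""
    record = individual_rep[person] or kind_rep[kind]
    if not record:
        return "D"  # a total stranger of an unknown kind
    return "C" if record.count("C") >= record.count("D") else "D"

# Other people's experiences of various Scotsmen stand in for my own:
report("Angus", "Scotsman", "C")
report("Hamish", "Scotsman", "C")
report("Hamish", "Scotsman", "D")
print(decide("Angus", "Scotsman"))   # "C": decided on his own record
print(decide("Dougal", "Scotsman"))  # "C": no record, so the stereotype decides
```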

Now, onto the substantial question: what makes cooperative societies possible, and what vitiates cooperation? Kristor cites this passage, from his original post, as an illustration of the “pelican principle”, i.e. that mutual respect and fellow-feeling reduce social friction and remove obstacles to the (cooperate, cooperate) equilibrium:

Thus the crucial importance of honoring parents. Filial piety entails honoring siblings, and their spouses and children. It entails honoring all your kin. On that basis, only, can a generally profective society be built. Profection is an artifact of kinship.

This claim is the quintessence of Kristor’s original post, and what it means is the real question. Because, of course, tight families in themselves are not a roadmap for a society-wide (cooperate, cooperate) equilibrium. Sicily is the paradigm: tight families, a corrupt society, and the (defect, defect) equilibrium. Indeed, many (including Saint Thomas) believe that the great accomplishment of the medieval Church was breaking up tight-knit family structures wherever its power was strong, by enforcing a hyper-strict version of the Roman law on incestum. As Western Christians became less genetically tied to their clans relative to their neighbors, the new diffusion of social obligations and biological interests led to greater levels of altruism: (cooperate, cooperate).

(Recall CyborgNomade’s comment, and my reply: more trusting interactions with Peter can make my interactions with Paul trustless, or vice versa.)

My heart says that Kristor is right: the family is the nucleus of civilization. A strong family structure, the obligation to honor one’s parents, and filial piety in general do promote cooperative societies. But why? It is no surprise that, as Kristor notes, “mediaeval knights took into their households the sons of their brothers-in-arms, and married their daughters to their friends, their lords, and their vassals.” Hostage exchange (trust but verify!) and diplomatic marriage are fine ways to commit to future cooperation! But such commitments are essentially limited in scope: they promote cooperation with some specific rival, e.g. the father of the squire or of the bride. (Well — up until Christianization these Germanic princes actually entered into as many diplomatic marriages as they had political allies, but you see what I mean.)

Minds at or near complete defection are prone to apprehend all acts of others as defections more or less veiled…

Good people and societies can be suspicious, cautious. And so they ought to be: Trust but verify, as a notably canny profector once said; gentle as doves, sly as serpents.

The nature of this “veiling” is the real question! It is probably a deeper question than game theory can solve (a fundamental challenge, not just of ethics, but of philosophy itself). “Trust but verify” gets at the heart of the paradox; it may feel good to accuse others of paranoia, but if you insist on verifying, in what sense did you ever trust?

(I agree whole-heartedly about “gentle as doves, sly as serpents” — that verse was the very first thing that appeared on my blog — but remember that trust-but-verify was the Apostle Thomas’s approach to faith.)

the Marxian analysis gets [social breakdown] correct in the same way that a broken clock is correct twice a day

This point may help clarify the veiling-paradox: are the Marxists saying that the clock is currently pointing to “heartless slaughter”? Or that the underlying motives and strategies which point to conformity to bourgeois respectability now will just as easily point to heartless slaughter when the time is ripe? In other words, what is being veiled and what is being shown?

To describe someone’s behavior as “veiled defection” doesn’t mean he is actually, currently defecting. (Likewise, describing someone as “essentially a criminal” typically means he has never committed an actual crime!) Instead it means he is following a strategy which puts too low a value on cooperation, that the attitudes and motives which recently produced cooperative behavior are the exact same ones that would quickly cause him to defect under different circumstances.

I should have been more careful to make clear that both minds and societies are located somewhere on a spectrum from perfect defection to perfect profection.

I would agree that this “spectrum” exists in the most general sense but demur if construed literally. Relative levels of defection and cooperation are not a question of degree, where we go from DEFECT=1 and COOPERATE=∞ along a sliding scale of increasing profection. Different people will cooperate and defect in different situations and with different counterparts. (These sorts of strategic compensations are not just conceptually possible, but systematically likely; let me repeat yet again that less trust with respect to X typically goes hand-in-hand with more trust with respect to some Y.) Thus while no particular defector is a defector in the absolute sense, there is no direct mapping of patterns of defection/cooperation onto a scale. Thus there can be no equation that will convert a defective strategy into an equivalently defective spirit, or vice versa.
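
(To put the objection in miniature, with numbers invented purely for illustration: two context-dependent patterns of cooperation can each exceed the other somewhere, so no single point on a defect-to-profect scale captures them without discarding exactly the pattern that matters.)

```python
# Invented cooperation rates by context; neither profile uniformly dominates.
alice = {"with_kin": 0.95, "with_strangers": 0.20, "in_business": 0.80}
bob   = {"with_kin": 0.40, "with_strangers": 0.70, "in_business": 0.60}

print([c for c in alice if alice[c] > bob[c]])  # ['with_kin', 'in_business']
print([c for c in alice if bob[c] > alice[c]])  # ['with_strangers']

# Collapsing each profile to a single average erases that pattern:
print(sum(alice.values()) / 3, sum(bob.values()) / 3)  # roughly 0.65 vs 0.57
```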

I don’t disagree with Kristor that there is some sort of correspondence between defective strategy and defective spirit; my working assumption is that the two are correlated. I do worry that the simplest intuitions about the nature of “cooperative spirit”, distinct from any actual cooperation, make it sound like chaotic good. Pinning down the relation between the two could be an interesting challenge.

 


8 thoughts on “(Discussion of “Defection”)”

  1. The thing about geeks, mops and sociopaths really caught my eye. Whatever the game-theoretic considerations may be, in a successful social movement you absolutely need those guys around: their very sociopathy brings with it political chops, because they burn with Augustinian libido and consequently budget all of their time thinking about how to satisfy it, namely by getting cash, followers, and power, whereas the geeks live primarily in the symbolic-mental world of The Thing. A wholesome social system of any type or scale needs all three caste food-groups for a healthy diet on which it can thrive.


    1. Yeah, I believe that he goes into that in the essay (or at least it’s implicit, and someone else makes that point elsewhere); but the question to ask is “Successful for whom, Kemosabe?”

      From the perspective of some sort of demiurge, the final end of some embryonic social dynamic is to “change the world”. To do that the dynamic needs to scale up, to have “influence” and “make an impact”, and for that it badly needs the sociopaths (or at least, it needs someone to fulfill the function Meaningness ascribes to sociopaths in that model).

      From the perspective of the creator-geeks, there is a trade-off between getting the ideal creative network and (potentially) getting lots of material and social rewards. There is no right or wrong answer.

      For the fanboy-geeks, there isn’t much of a question: “success” for the “social movement” is the failure of everything they care about. (And typically they get burnt when the embryonic stage ends and they lose their ecological niche; their investment of effort was only rational if the “movement” stayed in equilibrium.)

      The sociopaths and MOPs have to side with the demiurge, of course.

      (BTW, if you like this perspective there is another post somewhere 1 or 2 steps removed from the reactosphere which makes a “nerds/geeks” distinction with a similar intent. The model is that nerds create a system to explore its formal properties, whereas geeks want to learn facts about the system and collect physical souvenirs as a form of social interaction. The key is that nerds are scarce relative to geeks and need some minimum number of participants to do system-activities, so they court geeks; but geeks don’t technically need nerds, and are happy to invite as many participants as possible to reap social-relation benefits.)


  2. They say the Tea Party got ruined by professional fund-raising grifters and vultures precisely according to the pattern outlined in the geeks article. But on the other hand at least the Tea Party saw the inside of the corridors of power, even if just for the proverbial fifteen minutes.


    1. I wasn’t on the right when the Tea Party happened so I really learned next-to-nothing about it. I had the vague impression that the fund-raising grifters were its original sin: they may have recruited well-meaning senior-citizens who drank the Kool-Aid, but the infrastructure was the sole property of the Koch Bros. from the very beginning, or something like that. No?


  3. Wow, thanks again for your engagement with this topic. I don’t have any response to most of what you say (other than, “good point; just so”). But some thoughts arose on a few of the topics you touch on:

    … while people have excellent heuristics for navigating physical and social obstacles without explicit thought, they start tripping over their own feet as soon as they start trying to improve the heuristics by overriding them with theorems in order to cover cases which were rare or unimportant in our evolutionary history.

    Yeah: it’s at the margins that the heuristics (of any sort, respecting any topic) always break down, prompting deliberation and refinement of the heuristics. And when you start thinking about something that had always been free of trouble, lo and behold it starts to break down. You can verify this experimentally. Try paying close attention to how you are walking. Dollars to doughnuts, your conscious attention to the hitherto unconscious subroutines will mess them up somehow (this is one of the inveterate effects of consciousness), and you’ll start walking worse than you had been, in some way (albeit perhaps better in some other). Conscious attention *just is* intervention in the system observed (this is a principle, not just of psychophysiology, but of QM). It tends to introduce difficulties of its own (also known as unintended design consequences of well-intended creative novelties) (few of which, in the nature of things, are likely to work out) (this being one argument for Taoist/libertarian/subsidiaritan benign neglect on the part of the sovereign). Some such refinements work, some don’t. Most of them, like epicycles, cope with the marginal outliers and exceptions by increasing computations. Eventually work grinds to a halt as the epicycles compound, and computational load goes to system overload.

    In short: theories are wrong. Good theories are tolerably wrong (and up to some limit of poor performance, tolerability of favored theories can be adjusted upward at will using unprincipled exceptions – black boxes, intentional sloppiness, voluntary lacunae, bad faith, repression, projection, and so forth).

    … tight families in themselves are not a roadmap for a society-wide (cooperate, cooperate) equilibrium. Sicily is the paradigm: tight families, a corrupt society, and the (defect, defect) equilibrium.

    Excellent point. Romeo and Juliet is a fitting parable of the tragic tension between the polis and the constituent clans thereof. How to transcend clan loyalty in some higher loyalty, so that Verona can attain functional profection? By allowing intermarriage. As the play shows, the alternative is some sort of tragic plague on both houses. Once the tragedy is brought to mind, both houses can understand that their fortunes lie in profection with each other. Then, Romeo and Juliet will be allowed to marry; indeed encouraged to do so.

    “Trust but verify” gets at the heart of the paradox; it may feel good to accuse others of paranoia, but if you insist on verifying, in what sense did you ever trust?

    The verification usually takes care of itself. We venture to trust an adversary, and he then either defects, or he doesn’t; and we learn about his trustworthiness in future plays. No special measures of verification are usually needed. Verification is needful when the population of players is large enough that we have to rely on generalized heuristics derived from multiple iterations (i.e., traditions) rather than direct experience of counterparties: brand, reputation, stereotypes, signals (clothing, posture, grooming, and so forth), body language, physiognomy, and the like. Formalized verification is needful when a single defection could be totally catastrophic, as with the nuclear deals that Reagan and Gorbachev were working on. In such situations, completed rounds of mutual inspections substitute for iterations of actual game play.

    … are the Marxists saying that the clock is currently pointing to “heartless slaughter”? Or that the underlying motives and strategies which point to conformity to bourgeois respectability now will just as easily point to heartless slaughter when the time is ripe?

    It seems to me that what the Marxists are saying is that bourgeois respectability is the way that we rationalize oppression. Viz., the vast literature on false consciousness: “People believe they are behaving properly toward each other, when really they are exploiting each other.” Marxists view all trade as exploitative per se. Their postmodernist heirs view *all social transactions whatever* as exertions of power. In the postmodernist limit of the Marxian dialectical analysis, all the words we speak and all the things we do – regardless of how we intend or understand them – are *nothing but* exploitations: defections.

    … both minds and societies are located somewhere on a spectrum from perfect defection to perfect profection.

    I would agree that this “spectrum” exists in the most general sense but demur if construed literally. Relative levels of defection and cooperation are not a question of degree, where we go from DEFECT = 1 and COOPERATE = ∞ along a sliding scale of increasing profection. … while no particular defector is a defector in the absolute sense, there is no direct mapping of patterns of defection/cooperation onto a scale. Thus there can be no equation that will convert a defective strategy into an equivalently defective spirit, or vice versa.

    I deployed a bit of intentional sloppiness in the diction of that passage, because most of our quotidian heuristics for gauging our counterparties are likewise necessarily sloppy.

    A particular play is to be sure either defective, or not; so that for any one play, the choice between defection and profection is quite digital (bearing in mind that there are usually lots and lots of ways to defect, and only a few to profect).

    But people have characters, or spirits as you call them. And there is feedback, as acts engender the reinforcement of their repetition: tactic → strategy → habit → character → tactic. Anyone who has suffered the compounding complexities of subsequent falsehoods to which a lie obliges us can see how a single trivial defection can propagate and compound through the whole system of the psyche, and of its familiar and business relations, ruining them all.

    But then, people are complex, and most of them are engaged most of the time in trying to figure out the right way to proceed in this or that situation. So almost no one is utterly abandoned to evil.

    Thus we can say in the most general fuzzy terms of one fellow, “man, that guy is as honest as the day is long,” and of another, “that guy is totally whacked,” and of a third, “this guy is usually trustworthy.” Then again, people can be totally trustworthy about one sort of thing, and complete flakes about others. It’s fuzzy. Given our computational limits, it has to be.

    A model has a literal meaning, and its details carry precise implications. That the model only approximates that which it models does not make it a metaphor. Both measurements and approximations express a literal claim about a magnitude, but with different degrees of confidence; metaphors are not less exact alternatives to accurate measurements, but attempts to express something which cannot be phrased in terms of metaphor.

    I guess that you meant that last sentence to read:

    … metaphors are not less exact alternatives to accurate measurements, but attempts to express something which cannot be phrased in terms of *models.*

    Definite things all have forms. They all then have complete formal specifications. They are therefore all amenable to formalization in models (at least in principle). But for finite minds, formalization is fiendishly tricky, laborious and time consuming. So we don’t engage in it, much. Indeed, we (quite properly) avoid it wherever we can do so without difficulty. Instead, we gesture in the direction of a formalization, picking out features of one thing that it shares with another we know well.

    Understanding is the experience of realizing that x works like y, or is somehow a type of y, or that x and y are both types of z.

    Consider, e.g., “rut,” “subroutine” and “habit.” A habit is not a subroutine, nor is either of them a rut. Habits and subroutines are not literally deep or long or sticky the way that ruts are. But “rut,” “habit” and “subroutine” all denote a characteristic iterated form of operations. And we could, if we wished, model each of them exactly using (many of) the terms we customarily employ with either of the others. That mapping of terms – from terms denoting the literal properties of ruts to those denoting the literal properties of habits, e.g., or vice versa – is the first step in building a model of one phenomenon in terms of another.

    But nailing things down by use of models is usually too much trouble for our purposes. In daily discourse, we *almost never* employ models explicitly (although they are implicit in all our ordered acts), because they are just too cumbersome. So we employ instead metaphor. If we want to get more specific, we use first simile, then analogy, and finally a fully specified model, employing tightly defined terms and definite relations.

    E.g.: the neural term mapped to “depth of rut” is “synaptic firing threshold over a circuit.” Note that the latter term from a terrifically exact model employs three metaphorical terms that have nothing literally to do with neural systems: circuit, threshold, and firing.

    Summing up then, I would say rather that metaphor expresses something that is somehow *difficult* to express using models, given the situation, resources, and purposes at hand. Models then are not different from metaphors in kind, but rather in degree of specificity.

