If you have never gotten around to it, this would be an auspicious weekend to read Flaubert’s Hérodias.
Courtesy of history’s most underrated musical collaboration:
The beginning of Kristor’s recent post on categorical imperatives is excellent. However, as he begins to drift into the language of game theory to explain the importance of honoring God and kin, he makes a few points I would like to take issue with.
(If you did not follow the link: note that Kristor prefers “profect” to “cooperate”.)
I. This seems inaccurate:
Defection is all that the defective mind can see. So the Marxian mind sees all profective moves as veiled defections. Only defective minds – minds themselves immured in the strategy of defection – could find this account of human life convincing.
Highly coop-coop environments have second-order dynamics that channel possible defections or partial defections into hypocrisy, pharisaism, and so on. In a degenerating society, cooperators will tend to assume that anyone who wants to defect has already defected, because the costs of defection (and in particular the costs of open defection) are low anyway. But in a highly coop-coop society which has a known level of (unidentified) defections, cooperative people will start to become aware that some of these defections must be the work of people who appear to be cooperative; i.e., they are following the strategy of defecting wherever it is safe to do so. In fact, some of these veiled-defectors may be cooperating specifically in order to lull their targets into a false sense of security!
This relationship between collective cooperation and hypocrisy appears at the social level, but it is even more striking when looking at institutions within society. Obscure cults or associations can rely exclusively on the tiny minority of the population who share an obsessive dedication to the group’s ideals. So long as it still has a reservoir of outliers to draw on and its own irrelevance shields it from the notice of amoral sociopaths, a growing institution can count on high levels of internal cooperation.
But as an institution grows, it attracts more attention and eventually outstrips the supply of idealists. It will soon have ordinary members who are “in it for the paycheck” but try to blend in with the institution’s culture and its pieties. Eventually, it will attract sociopaths who blend in with its culture precisely to deceive the remaining idealists (and the clock-punchers who look to idealists for guidance). If a sociopath succeeds he can maximize his share of the power and influence, since the institution will try to entrust the power it is amassing to those most likely to cooperate.
So even if we refrain from making any particular assumptions about the psychology and morality of Kristor’s cooperators, we can see how they could fall into skepticism and cynicism about the actual motives of those who surround them. They don’t know who the hidden defector is — it could be anyone! From there it is only a short step to It could be everyone, if one remembers that many people following the “defect wherever it is safe to do so” strategy may never find a chance to defect safely and secretly.
II. But we can go further. Note that Kristor describes responding to a defective environment with cooperation as “simply stupid”.
In defection, it is simply stupid to proceed other than by defecting at every turn.
This is the exact rational egoism the bolsheviks are attributing to law-abiding bourgeois! “I’ll cooperate with you so long as cooperation helps me, and I’ll slaughter you when it starts to look like things are going downhill.”
I had a debate with Jim a few weeks ago on a similar topic. Jim can’t quite accept that ordinary, reasonable people want to see signs that you see them as “on your team”, as an amicus rather than a hostis. Game theory is hard even in a lecture hall. It’s much harder when you extend the models to include the sorts of wrinkles which affect real-life situations; and hardest of all is figuring out what kind of situation you’re in, from a game-theoretic perspective, as it is happening to you. (And remember: in most games it matters whether you expect your opponent to play intelligently or not!)
If game theory is not always easy, and if for people of average intelligence it’s never easy, then the strategic arguments in favor of the (cooperate, cooperate) equilibrium become massively less convincing when the equilibrium you are discussing is very fragile.
Infinitely iterated prisoner’s dilemmas do have a (cooperate, cooperate) equilibrium, under certain reasonable assumptions, but how many infinitely-iterated social interactions have you been in? Even life-long interactions are slender threads in the overall fabric of social life. You can finesse this shortage of infinitely-iterated interactions by saying instead that the players don’t know when the interaction will end… but realistically, how often are you in interactions where you really have no idea whether or not this will be the last interaction? You might reply that while certain interactions are predictably short-lived, the reputation you gain in them affects your other interactions (and in particular, affects whether others want to enter long-term interactions with you); fine, but what percentage of your one-off interactions affect your long-term reputation? And how do you know whether the interaction will affect your counterpart’s reputation? (That’s a necessary part of the strategic argument for cooperation, too.)
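To make the fragility concrete: in the standard folk-theorem setup, grim trigger sustains (cooperate, cooperate) only when the probability of another round is high enough. A minimal sketch in Python, using the textbook payoff values (the specific numbers are illustrative, not from the post):

```python
# Grim trigger in an indefinitely iterated Prisoner's Dilemma.
# Textbook payoffs (illustrative): T = temptation, R = reward,
# P = punishment, S = sucker's payoff; a PD requires T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def cooperation_sustainable(delta):
    """True if grim trigger supports (cooperate, cooperate) when each
    round is followed by another with probability delta.

    Cooperating forever is worth R/(1-delta); deviating once earns T
    now, then P every round after, since the opponent defects forever.
    """
    forever_cooperate = R / (1 - delta)
    deviate_once = T + delta * P / (1 - delta)
    return forever_cooperate >= deviate_once

# Cooperation survives only above the threshold (T - R) / (T - P) = 0.5:
threshold = (T - R) / (T - P)
assert not cooperation_sustainable(0.4)
assert cooperation_sustainable(0.6)
```

If the interaction is instead known to be finite, backward induction unravels cooperation entirely, which is exactly the shortage of infinitely-iterated interactions the paragraph points to.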
Game-theoretic arguments for moral behavior are, I believe, generally correct, but such arguments are surprisingly fragile. They hinge on whether other people will consistently find them convincing and, worse still, whether they will be able to consistently tell whether other people have found them convincing. (If I own railroad tracks that run by your property, am I “defecting” when I have a train drive by you, or are you “defecting” when you threaten to try to prevent me? What if the whistle from the train wakes you up in the middle of the night? What if the soot from the locomotive is making your children sick? What if the sparks from the tracks started a fire on your property?)
One source of fragility is the players’ total indifference to each other’s welfare. The belief that you want what’s good for other people prima facie, before entering into any considerations about whether it’s also good for you, is a powerful lubricant for social friction. For one thing, it means that a final-round defection in a short-term interaction (from which both players can iterate backwards and decide to defect in the first round) is no longer a big threat. For another, it means that tail-risk driving lots of social conflict (the risk that your counterpart will screw you over big-time for a trivial gain, simply because he has no reason not to) can be bracketed.
There are some (narcissistic but not malevolent) psych disorders where people intentionally screw up, hurt themselves, hurt others, just to see the reassuring evidence that their friends and family still value them when they’re not perfect. This is pathological, but the exception probes the rule: everyone needs love and respect, most people can’t navigate through coordination problems without it, and this need is so powerful that a tiny minority have a self-destructive drive to defect to feed it.
Paradoxically, this means that the type of community where the strategic equilibrium of (cooperate, cooperate) is stable will itself be quite fragile, because a community whose members care about each other so much that they will continue to cooperate in the face of signs of defection is ripe for invasion/exploitation by sociopaths, vampires, and callous outsiders. Conversely, the type of community where strangers typically view one another as cattle is likely to remain that way, but any of its ongoing cooperative interactions (particularly interactions between ordinary, not-so-smart people) are quite fragile and likely to fall apart.
The bottom line is you’re unlikely to get a (cooperate-cooperate) equilibrium in any community unless — pace Kristor — you would consider it perfectly sensible to continue to cooperate with a defector, under some circumstances, simply because you care about the defector for his own sake even if he is currently screwing everything up. This is the love of the pelican for her chick; this is the love of the father of the parable for his prodigal son. Perhaps we can also compare it to God’s eternal love for his fallen creation.
III. Finally, a minor point about use of game theoretic imagery as metaphor. Note that Kristor jumps immediately from the “Hobbesian war of all against all” to “the defect ↔ defect game phase” without specifying which game we are talking about. Presumably he means the Prisoner’s Dilemma. Probably he assumes everyone in his audience has seen PD juxtaposed with Hobbes’ state of nature before, and will draw the connection; or maybe he assumes none of them have ever heard of any formal game besides PD.
Any NxN game can be described in terms of cooperation and defection so long as (cooperate, cooperate) is better than (defect, defect) for both parties. The actual pay-off values of the matrix determine whether the game is a Prisoner’s Dilemma or some other game. If, for example, 1’s pay-off for (defect, cooperate) is lower than (cooperate, cooperate) but higher than (cooperate, defect), he is in a Stag Hunt. Strategies that are dominant in one game may not be in another; for example, defect is the dominant strategy in PD but both (cooperate, cooperate) and (defect, defect) are Nash equilibria in SH, so neither strategy is dominant.
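The distinction can be checked mechanically. A small sketch (restricted to the 2×2 case, with illustrative payoff values) that finds the pure-strategy Nash equilibria of a game and confirms that the Prisoner’s Dilemma has only (defect, defect) while the Stag Hunt has both:

```python
from itertools import product

def pure_nash(payoffs):
    """Pure-strategy Nash equilibria of a 2-player, 2-strategy game.
    payoffs[(s1, s2)] = (payoff to player 1, payoff to player 2)."""
    strategies = ["C", "D"]
    equilibria = []
    for s1, s2 in product(strategies, strategies):
        # Neither player can gain by unilaterally switching strategies.
        best1 = all(payoffs[(s1, s2)][0] >= payoffs[(a, s2)][0] for a in strategies)
        best2 = all(payoffs[(s1, s2)][1] >= payoffs[(s1, a)][1] for a in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

# Prisoner's Dilemma: temptation beats reward, so defection dominates.
pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
# Stag Hunt: reward beats temptation, so two equilibria coexist.
sh = {("C", "C"): (4, 4), ("C", "D"): (0, 3), ("D", "C"): (3, 0), ("D", "D"): (3, 3)}

assert pure_nash(pd) == [("D", "D")]
assert pure_nash(sh) == [("C", "C"), ("D", "D")]
```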
Or I should say: defect is the dominant strategy in a single-iteration PD. This is really my point. Even if you do specify what game two players are playing, knowing the pay-off matrix really only gives you a superficial understanding of the strategy of the game if you do not specify the parameters of the game in question. How many rounds the game has, how much information the players have about their counterparts, how the counterpart is chosen, and how the results are aggregated all strongly affect which strategies “work” and which don’t. Maybe this is too much info to present to a popular audience… but if your point doesn’t even depend on which game you’re talking about, why use the game-theoretic metaphor?
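A toy illustration of the aggregation point (the strategies and payoffs are the standard textbook ones, not taken from the post): head to head, always-defect beats tit-for-tat, but once scores are totaled across pairings, mutual cooperators come out far ahead.

```python
# Iterated PD with the standard payoffs (illustrative values).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat1, strat2, rounds):
    """Run two strategies against each other; return total payoffs."""
    hist1, hist2 = [], []
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = strat1(hist2), strat2(hist1)
        p1, p2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        hist1.append(m1)
        hist2.append(m2)
    return total1, total2

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

# Head to head, the defector always comes out ahead...
assert play(tit_for_tat, always_defect, 10) == (9, 14)
# ...but once results are aggregated, cooperators do far better together:
assert play(tit_for_tat, tit_for_tat, 10) == (30, 30)
assert play(always_defect, always_defect, 10) == (10, 10)
```

Change the number of rounds, the information available, or the matching rule, and the ranking of strategies changes with them, which is the point.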
In Kristor’s case I don’t think this really undermines his point (although admittedly, I’m uncertain how literally I should try to draw out the game-theoretic implications of some of his claims). That’s why I’m bringing it up now, because Kristor is clearly comfortable with game theory and is not in any way using the language to mislead or obscure. If someone actually did use clichés and metaphors from game-theory in an unintelligible or counterproductive way, I would be perplexed about how to approach the problem without insulting him; so it’s best to flag the issue in this very benign case, to head it off elsewhere.
An example of the end-result of “game theory as metaphor” is the fuzzy use of “Schelling point” in the React-o-Sphere. A Schelling point is a feature of coordination games where:
The last point is the key. Schelling describes experiments where he shows soldiers a map and says “If you parachuted down onto a random location on this map, where would you go to rendezvous with a second parachutist, assuming you couldn’t choose a meeting-point in advance, don’t know where the other man landed, and have no way to communicate with him?” If the map has three hills and a bridge, the soldier decides to go wait for the other man on the bridge. If the map has three bridges and a hill, the soldier decides to go to the top of the hill. Whatever feature on the map is uniquely salient — whether due to its centrality, its symmetry, or simply because there’s only one of it — is the Schelling point. Both parachutists will expect the other to head there first, because it’s the only point on the map which doesn’t have corresponding points with an absolutely equal claim: even if they both thought the other would go to one of the three bridges, there is still only a 1/3 chance they pick the same bridge.
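The 1/3 figure is just the uniform-coordination probability: with k equally salient candidates and independent uniform choices, the parachutists meet with probability 1/k. A trivial sketch:

```python
from fractions import Fraction

def coordination_prob(num_equally_salient):
    """Chance two players meet if each independently picks one of
    num_equally_salient equally-attractive landmarks uniformly at random."""
    return Fraction(1, num_equally_salient)

# A single uniquely salient hill: coordination is certain.
assert coordination_prob(1) == 1
# Three interchangeable bridges: only a 1-in-3 chance of meeting.
assert coordination_prob(3) == Fraction(1, 3)
```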
(If one of the joint-payoffs in a coordination game is best for both players, it’s no surprise when they independently choose the two strategies which lead to that payoff. Thus condition #5; no Schelling point unless there’s no best payoff. But this doesn’t mean that the payoffs all need to be identical; the reason for the complex qualification to #5 is that the most salient feature of a certain strategy might be that its pay-off is lower than all of the others! If the possible payoffs on a 5×5 matrix are (100, 100), (100, 100), (100, 100), (99, 99), (100, 100), you can bet that the players will spontaneously converge on the worst payoff.)
People on the Right say “Schelling point” a lot. I think most of the time they mean “equilibrium”, or “Nash equilibrium”. Sometimes they mean “dominant strategy”. Frequently there is some kind of connotation of the emergence of spontaneous order over time. It’s often quite unclear. Usually it doesn’t completely obscure the argument. But it certainly doesn’t clarify it; and over time it creates uncertainty about what the heck other people mean by “Schelling point”, and how they will understand your use of it.
If we couldn’t communicate with one another about how to use “Schelling point”, then the “Schelling point” Schelling point would be to use “Schelling point” to refer to Schelling points. This strategy has the unique virtue of being the original definition of the term, and also has the strongest relationship to the work of Thomas Schelling. Since we can communicate about this choice there is (sadly) no “Schelling point” Schelling point; rather than coordinating our language based on salience we can discuss the pros and cons, and gradually modify our usage over time. So let this be my contribution to the discussion: use game theoretic terms when you have a model in mind and need terms to illustrate it, but when you need a metaphor go elsewhere.
I noticed an incoming link to Babies, Families, and Status-Signals. Rereading that note inspired the following thoughts:
By the way, this is probably as good a time as any to mention that Quas Lacrimas will likely go into hibernation later this year. If this does come to pass, nothing is amiss, do not be alarmed; expect QL to emerge from hibernation in mid-2018.
Entities are generally thought to be different from propositions. Propositions are mere words — puffs of air, pixels organized alphabet-wise — which refer, in turn, to “ideas” or “beliefs” which are even less substantial. But both propositions and entities can have real consequences. (To be more precise: both the existence of an entity and the truth of a set of propositions can lead someone to draw inferences about the course of future events. When hindsight reaffirms his predictions, he will generally take it as proof of his good judgment.) Any type of event which is a consequence of an entity’s existence could also, in principle, be a consequence of a set of propositions, and vice-versa. Moreover, for any particular entity-consequence relationship it is always possible to rearrange the relationship in the form of a set of propositions that makes no reference to the existence of the entity.
(Corollary to IX: As a species of error, a metaphysical illusion amounts to nothing more than an incongruous set of propositions.)
It follows from -IX- that when two groups have an irresolvable disagreement about the world, metaphysical principles about what sorts of entities are possible are never at the root of the dispute. Beliefs about effects of “spooky” entities are no different from beliefs about consequences of sets of propositions, grouped/labelled/reified, so the accusation that one’s opponent accepts the existence of non-existent entities does not in itself identify an error he has made or explain the crux of the disagreement. Whatever error there is in his position, it would be equally evident if his position were framed relative to entities whose existence you not only deny, but deny a priori, or relative to propositions which you reject.
When I say that a certain object is an X, not a Y, but I can’t point to the practical difference it makes whether the object is X or Y, the distinction is not for that reason metaphysical, “spooky”, or meaningless. Perhaps I can’t even come up with an example of an existing object which is a Y, rather than an X. Perhaps I can’t even explain the meaning of the definition of Y to you. Conceivably both the existence of some Y and the meaning of the definition of Y are beyond our individual cognitive powers (mine and yours both), so we could never find a tangible example of a Y or an intelligible (to us) definition of Y, no matter how hard we tried. Nonetheless, a distinction will endure in the face of our ignorance of it, and if the distinction has material consequences its effect on us will not be less because of our blindness.
Pointlessly convoluted theories are often described as having “epicycles” — particularly when the convolutions are added in an attempt to squeeze new, and unanticipated, data into an old theoretical framework. But the problem with epicycles as a feature of Ptolemaic and Copernican cosmology wasn’t that they were wrong. Many excellent theories have failed to obtain anything close to the arbitrarily-good fit to the data which epicycles allowed. But a good theory should generate insights and techniques which go beyond the narrow question of prediction/observation fit in one narrow domain of investigation.
“Values” start out as equivalences. “Equivalent” means of equal worth. If I assert that two things are equivalent, I imply there is some context in which the one is as good as the other. What things we are willing to admit as equivalents ends up determining our principles and our objectives, what we find fascinating or irrelevant, and even what we judge plausible or implausible, true or false. In realizing that equivalences have this effect, the word we use to describe the equivalences in light of their effect (first “values” or “valuations”; later “evaluations”, “worldviews”, “interpretations,” and many others) undergoes a traumatic metamorphosis under the heated pressure of social scrutiny. Once it is understood that men who use different systems of equivalences in weighing different objects and situations are often led by these equivalences towards different conclusions about moral, political, and metaphysical principles, people begin to treat the equivalences as though they were simply a different way of stating the principles they cause their adherents to accept. A certain label (initially, “value”) continues to designate the equivalence-proposition even as people begin to treat it as a statement of moral principle, which changes the received meaning of “value” to the point that people look for a new word to denote equivalence-propositions (e.g. “interpretation”), which immediately begins to suffer the same fate.
Everyone holds grudges. Grudges bias perception. Even in unusually clear-eyed cases, people use inference/loyalty heuristics which are based on their past history with others. Personal record, reputation, status: these all matter. To forgive unwisely is not only dangerous, but invites contempt. But who we trust determines who we hear good and bad things about… which determines who we trust in the future. Meanwhile, each advisor/informant has his own history and his own rivals whom he distrusts, and his own colleagues he relies on. To be accused of bias does not mean you must deny the bias, or that you must turn back time to undo it. Rather, you must be aware of the structure of authority and faith, so that you understand equally well how your faith in men can get out of sync with what they deserve, and how to adapt when you recognize a misalignment.
There are no general considerations about the nature of knowledge or reality which should bias us towards tardy or faint-hearted decisions. Anything worth doing is worth doing well. If you hesitate before stepping out onto a busy street, that is because it is far better to stay on the curb if the alternative is getting hit by a car. Any general account of the epistemological uncertainty involved in crossing a street at a walk signal, amid a crowd of pedestrians, while all the cars are stopped at a red light must draw a distinction between the metaphorical “uncertainty” of this second situation and the literal uncertainties involved in the first situation.
More generally: in any domain where there are incompatibilities between the beliefs of the various participants in the domain, a skeptic must be able to distinguish between having reasons to be uncertain whether X or ~X, and being uncertain whether one’s reasons (whether they entail X, ~X, or uncertainty) are better than the reasons which have led another to a different conclusion. Imagine different groups of survivors setting forth in lifeboats from a sinking ship: some of the crews may believe the nearest land is to the north and head that way, while others head west, but having chosen one or the other it is critical that each group of survivors pursue its path steadfastly. The survivors express their uncertainty, not by zigzagging back and forth and making no headway, but by hoping that if they perish, their “rivals” at least will find dry land.
Self-deception is the psyche’s way of purging elements that it no longer wants — or at least getting them to go along with what could turn into preparations for a possible purge. People are accustomed to thinking of self-deception as a special case of deception in general, with the special goal of hiding all signs of mendacity. Perhaps this is sometimes the case; but the most common problem that thinking organisms face is inconsistent bundles of judgments and/or desires, which lead to irrational behavior in all its glorious self-destructiveness. But you cannot simply decide to get rid of ill-fitting judgments and desires, any more than you can simply decide not to want what you want, or not to believe what you believe. If the psychic element is independent enough to resist dismissal, it is strong enough to avoid an environment which will remold it, as well. Thus the need for secrecy and misdirection in psychic life.
(The same applies to groups as well, in a straightforward manner.)
In all the disputes and finger-pointing over the “Crypto-Calvinism” hypothesis, the original Moldbuggian insight gets obscured — for whatever reason (never mind what), some sort of sect (never mind which) adopted the following platform:
Intentionally or not, this sect had just mutated in a way that just happened to circumvent the separation of Church and State in America, allowing it to strictly dominate all other sects in the competition for power, prestige, and followers; for in all minor matters the mutant sect was like the legacy Christian sects, whereas any difference was strictly limited to that which allowed the mutant sect to invoke the power of the U.S. government.
(To explain that this sect mutated in a way that exploited the constitutional separation of Church and State begs the question of how and why the United States just so happened to have Church-State separation of a certain form at a certain time; certainly, individual states still had established churches when the Constitution and the Bill of Rights were ratified, and the relevant article was not amended in the interim. But we must leave this for another time.)
The success of this cult suggests another, more general rule:
Whether the group actually shares a common theological position is irrelevant; to succeed, they must organize themselves around a formal institution which denies the existence of such a position. In short, in America any Church worthy of the name must pretend to be some sort of secular NGO, PAC, or obscure political party. To set up the “occasionalism” which coordinates the group’s original position and the public position it adopts in its organizational form, all sorts of additional rules (likely inconvenient rules which over-burden the collective decision-making process, and occasionally lead to inconsistency or indecision) and superfluous personnel (effectively the “conscience” of the organization, a.k.a. political officers) must be instituted. To make the official secular platform of the group arbitrarily complicated (to give the group elastic authority to exclude heretics and expel apostates) would also be helpful.
As inconvenient as such measures may seem, they are strictly necessary, for once a group transitions from “religion” to “secular public interest group” it will no longer have any protection from the onslaught of the dominant progressive sect, which currently controls the levers of power and uses them to inflict its principles on all other organizations. This is how the game is played. If you want to seize political power you must first survive its use against you.
In fact, the same logic implies another, complementary general principle:
Whether the members actually share any theological positions is irrelevant; to succeed, they must organize themselves around a formal institution which insists that they do in fact share a common theology, and that this is their primary reason for associating. This is the only way to protect the organization from the grasping ideology of the progressive state, a cult which is jealous of the authority of all other institutions but which still must limit its interference in officially recognized religions.
This is where we are in The Current Year in the United States of America. Fair is foul and foul is fair.
[Continuing from here.]
The guiding principle of “memetics” was, originally, to find a conceptual tool that sane, rational people (people like Richard Dawkins!) could use to help enlighten superstitious yokels who were still clinging to religion. The idea was that if someone suffers from a delusion, part of the delusion is that they aren’t suffering from a delusion, so telling them that they are deluded (equivalently: wrong, mistaken, in error, inaccurate, irrational, dumb) isn’t going to work. A delusion typically extends to all the mean things you can accuse the delusion of being.
But if, instead, you could very, very carefully describe the idea of a mind-virus to the yokels, and show them how a mind-virus would infect its host and how contagious it might be and how remorselessly it parasitizes human beings to ensure its own continued replication, this might get in under their defenses. You see, the mind-virus can’t actually tell the host it’s a mind-virus (that’s the sort of thing it has to do to protect itself!), so when the mind-virus says “Don’t believe anything bad they say about me!”, this about me cannot, perforce, extend to mind-viruses, since the host does not realize the mind-virus is a mind-virus.
So the host cheerfully learns about mind-viruses… their key similarities to protein-viruses, how they work, their distinguishing characteristics… and then out of nowhere one day he’s mentally reviewing his system of beliefs (or something) and AAAAGH IT’S A MIND-VIRUS, where the heck did that come from? Or that is what Dawkins is hoping for: having familiarized himself with the concept of a mind-virus, Dawkins’ target is finally able to recognize his parasite for what it is, and start to struggle against it.
Curtain falls. Applause.
So memetics was originally intended largely as an attack on religion which would circumvent the adaptations the faithful have built up to attacks that are framed as attacks on religion. (Calling the mind-virus a “meme” is another layer of clever misdirection, shepherding the target towards his ultimate deconversion.) But we must pause for a moment and ask: cui bono? Dawkins et al. imagined they would be attacking religion on behalf of whom?
Or: on behalf of what?
Well, not on behalf of anything. For an enlightened, secular liberal like Dawkins, a caring man who believes in progress, autonomy, and rationality, freeing people from religion — curing them, really! — is simply a matter of principle. For anyone who respects the inherent value of liberty, autonomy, and enlightenment, attacking religion is a sort of absolute duty.
This is a roundabout way for Dawkins to say: “I am being forced to attack your principles on behalf of my principles.”
So long story short: Dawkins’ principles force him to attack Christ. Contemplating the nastiness of Christianity (at the behest of his principles) and searching for more damaging tactics to use against it (under pressure from his principles), Dawkins hits on the brilliant idea: These yokels are practically diseased… Christianity is like a virus, they just can’t see it yet because they’re so deluded and obsessed… if I explain to them the idea of a mind-virus, at the end they’ll have to recognize it for what it is… and they won’t know it’s an attack on Christianity because they won’t realize it is a mind-virus until it’s already too late!
Forcing this sort of crisis of recognition on an opponent is one of the oldest tricks in the book. Socrates was a master, but Horace [lat] wasn’t so bad either. The tactic is especially satisfying if, at the moment just before recognition dawns, one’s opponent is still smug, still lacking any self-awareness, and if his face contorts directly from contempt to dismay as he realizes that he is the intended target.
The one problem with this kind of approach is that, until one party cracks, both are confident and unsuspecting. One of them is over-confident. Could be me, could be you. Who knows? The risk you take when you fool around with logic is that one of these days you’ll back yourself into a corner and force yourself to learn something.
The ultimate problem for Dawkins’ witches’ brew of bolshy principles is that they too constitute a mind-virus. They too persuade the host not to reject them or think ill of them. And they too withhold from their host their viral nature, and so they cannot prevent him learning dangerous things about the nature of mind-viruses.
I have the vague impression that the reason “memetics” lost its conceptual punch in the public sphere was that it was originally trendy when people perceived it as potentially anti-Christian, but when the atheists realized with shock that atheism, too, is a meme, they decided to stuff memetics deep in their sock drawer and forget about it. Maybe I’m wrong about that. Either way, Dawkins at one point intended “memes” as a stalking-horse for “religions”, and in the end memetic analysis turned out to be a much more powerful weapon for Christians to deploy against secular liberals than vice-versa.
This is a long and highly-schematized version of a subkernel running amok. The secular-liberal kernel does not instruct its hosts to devote time to mastering and sharing a body of knowledge which portrays the secular-liberal kernel as fundamentally similar to that icky, contemptible Christian kernel it has been trying to stamp out. It does not instruct its hosts to study the unattractive ways in which it perpetuates itself, defends itself, and protects itself; in particular it does not direct their attention to how they, the hosts, fare over the course of all this self-promotion. All the secular-liberal kernel does is say (a) I’m not a mind-virus, and (b) Go attack that yucky icky low-status mind-virus over there. That is enough to inadvertently direct the host to receive, relay, and even research facts about mind-viruses that ultimately weaken the secular-liberal kernel, or even move the host onto the path to deconverting.
My point isn’t about secular liberalism or atheists, or even about the tactical value of memetic doctrine. It’s much simpler than that: if something as stupid as a glitch arising from meme-induced self-deception can inadvertently, against all of the goals which the kernel (and more importantly, its selfish component memes) directs the host to pursue, set the host on the path to rejecting the infection, then there are undoubtedly many paths leading to substantial modification of the very parts of the kernel that promote the goals the kernel presents to its host as important.
This is why it’s worth thinking about the endgame. The socio-political outcome matrix for restoration has many sub-sections. It’s unrealistic (stupidly unrealistic) to think that everyone can win on every question. On any given question, some people will readily compromise to advance a general victory for the Right, and others would rather defect to the Left and suffer slow suicide under progressive toleration than see their pet issue go down.
Strategically, those people are boring. They are held constant, so to speak. The question is, what will everyone else do? These are the people who might under some circumstances be willing to give in on that issue (or at least, are willing to accept an outcome with some discrete chance that they will lose), but under other circumstances would shirk or defect just in order to get their way on that one issue. These are the people who can be brought around to accept the inevitable necessity of conformity with grace, in advance, rather than only belatedly, when coordination and cooperation no longer make the difference between restoration and Cthulhu.
(Oh, and in general don’t pay any attention to whether people say that a certain issue is negotiable or non-negotiable. (A) They’re lying, (B) half the time they’re telling lies they’d be ashamed of if they spent two minutes thinking about how many people similar to themselves reconcile themselves to far worse setbacks, and (C) it’s fine that they’re lying because presenting yourself as an unstable, flighty partner is just a standard opening bid in any sort of collective action problem. — The question of how to get people to want to present themselves as stable and reliable is an interesting one with important connections back to the subkernel issue.)
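The shirk-or-conform dilemma sketched above has the standard free-rider structure, and a toy model makes it concrete. Everything here is my illustrative assumption, not anything from the post: the payoff numbers, the `expected_payoff` function, and the simplification that the odds of collective victory rise linearly with the number of conformers.

```python
# Toy free-rider model: each actor values a collective outcome
# ("restoration") plus one pet issue. Conforming means accepting up
# front some chance of losing the pet issue; defecting protects the pet
# issue for sure but weakens the coalition. All numbers are invented
# for illustration.

def expected_payoff(conform, n_conformers, n_actors,
                    collective_value=10.0, pet_value=3.0,
                    p_pet_win_if_conform=0.5):
    """Expected payoff for one actor, given how many actors conform."""
    # Assumed: collective victory becomes likelier as more actors conform.
    p_victory = n_conformers / n_actors
    if conform:
        # Conformers share the collective payoff, and win their pet
        # issue only with some probability (the risk they accepted).
        return p_victory * collective_value + p_pet_win_if_conform * pet_value
    # Defectors keep their pet issue outright but lower the coalition's odds.
    return p_victory * collective_value + pet_value

# The marginal actor's choice when the other 9 of 10 all conform:
n = 10
stay = expected_payoff(True, 10, n)   # conform along with everyone else
shirk = expected_payoff(False, 9, n)  # defect while the other 9 conform
print(f"conform: {stay:.2f}, defect: {shirk:.2f}")
```

Under these invented numbers the marginal actor does slightly better by defecting while everyone else conforms, which is precisely why the question of getting people to *want* to present themselves as stable and reliable partners, in advance, matters so much.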
(a) I probably won’t remember to keep checking the SM comment thread all week, but if you comment here or e-mail me I’ll almost certainly respond.
(b) Comments were good. Most of them wanted to move away from the theoretical frame of the article towards particular cases (like “I’m a Zoroastrian, this doesn’t apply to me because of XYZ facts about Zoroastrianism”). In many ways it’s a different conversation, but it’s good to know a lot of people are looking for an interdenominational cage-fight… y’all want red meat!
(c) Hadley quite rightly replaced my original bland title with one that got to the heart of why you should care about kernels and subkernels: Rules for a State Religion. But having gotten all of the people intrigued by “state religion!” into the room, the full scope of the argument may have been elided. Hmm – or maybe not. I’m really judging this based on the commenters’ reactions, but you commenters are a tiny fraction of the total readers; and it makes sense that only people incensed by the hot-button issue (state religion) would comment. Either way, post-restoration religious uniformity is the paradigmatic case for the development of subkernel(a, b) but is still only one application among dozens; and if you think there is something defective or fallacious about the general argument it’s probably not a defect that the details of your religious confession could clear up, because those can’t possibly invalidate the logical form of an argument which applies equally to convergence in beliefs in various domains.
When Razib Khan says the sky is falling, it’s probably time to seek shelter.
The darkness you perceive in my soul is that I suspect that the liberal order, which encompasses politics as well as the intellectual world we’ve cherished since the 19th century, is collapsing around us. Just as the Chinese in 1790 or the Romans in 460 were not aware that their world was coming to an end, we continue to carry on as if all is as it was. I’m sort of at the phase between the death of Optimus Prime in the 1980s cartoon and the emergence of Rodimus. I’m not going to turn into a bald-faced liar or ignoramus like so many of the people in the media around us just yet though (you know who I’m talking about I’m sure). Old ways are hard to give up! God has died but his shadow haunts me.