THE UKRAINE WAR. Part 2: Are US bosses mad? Or are they betting on MAD?
I need to go back and watch 'Dr. Strangelove' again...
MAD (mutual assured destruction) is supposed to keep the nuclear peace.
But the US is escalating a conventional war on Russia’s border.
And that is precisely what the inadvertent-escalation theorists say could ruin the structure of MAD.
What happens if the US and Russia have themselves an all-out nuclear war?
Well, consider the inventories:
As you can see above, the inventories of nuclear weapons possessed by the US and Russia, according to the Stockholm International Peace Research Institute (SIPRI), are quite large.1
According to Wikipedia, the United States has 333 cities with a population larger than 100,000. After hitting them all, Russia would still have an estimated 5,644 nuclear bombs fresh and ready to use.
And according to Encyclopedia Britannica, Robert McNamara’s estimate from 1965 was that the US could guarantee “assured destruction” of the Soviet Union with “as few as 400 high-yield nuclear weapons targeting Soviet population centres; these would be ‘sufficient to destroy over one-third of [the Soviet] population and one-half of [Soviet] industry.’ ” And that would leave the US, on current numbers, with a cool 5,028 nuclear bombs, fresh and ready to use.
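(For the record, here is the subtraction behind those two counts. The inventory totals are the ones implied by the numbers above, and ‘one warhead per target’ is my simplifying assumption.)

```python
# The arithmetic behind the two counts above. The totals are the ones
# implied by the article's own numbers; one warhead per target is a
# simplifying assumption.
russia_total, us_cities    = 5_977, 333  # SIPRI-cited total; US cities > 100,000 people
us_total,     mcnamara_400 = 5_428, 400  # SIPRI-cited total; McNamara's 'assured destruction'

print(russia_total - us_cities)   # 5644 warheads left after hitting every such city
print(us_total - mcnamara_400)    # 5028 warheads left after McNamara's 400
```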
You get the picture: nuclear holocaust.
In this context, it may strike you as remarkable that US bosses have made colossal arms transfers for a Ukrainian counteroffensive on Russia’s border. And perhaps frightening to learn that, in reply, Russian president Vladimir Putin said Russia would station tactical nuclear weapons in Belarus—in July. According to the Belarusian president, just yesterday (Tuesday, 13 June), Belarus already has the nukes:
“Belarusian President Alexander Lukashenko declared … that his country had already received some of Russia’s tactical nuclear weapons and warned that he wouldn’t hesitate to order their use if Belarus faced an act of aggression.” (my emphasis)
Aren’t US bosses scared? Why are they seemingly hellbent on escalating against Russia?
Tucker Carlson has colorfully expressed what many—on the left and on the right—have been (less colorfully) implying or saying, including Ron DeSantis, Donald Trump, Robert F. Kennedy Jr., and Noam Chomsky: “This … could easily bring the total destruction of the West. And Soon.” US policy in the Ukraine war, Carlson says, is “complete craziness.” The bosses are “clearly insane.”
That’s one hypothesis. But I find the following syllogism compelling:
Premise: Powerful world bosses have giant bureaucracies (defense, intelligence, diplomacy, etc.) that are always gathering and processing information for them, and these bureaucracies have teams using this high-quality and abundant information around the clock to do historical analysis and sophisticated game theory; so the bosses know more, and understand the world better, than I do.
Premise: Powerful world bosses are playing for world power, the game with the highest material stakes, so they are highly motivated to learn, to evaluate alternative strategies, and to measure risks, which means their bureaucratic decision-making process, which contains internal checks and balances, is unlikely to produce rash, willfully ignorant, or emotionally capricious policies.
Conclusion: Therefore, if the bosses make a major geopolitical move that seems irrational and/or ill-informed to puny little me, it is proper method to first ask whether I am missing something. (A simple application of basic scientific humility.)
My recommended theoretical approach—as I’ve argued—is therefore like this: do classical game theory, the kind originally developed by economists based on the assumption of canonically rational decisionmakers.
That is, hold constant the assumption that major US policies—crafted by the most powerful group of people in history—are both rational and well informed, and modify instead any other assumptions in your model of the world (about how the world works, or about boss goals, beliefs, values, alliances, constraints, etc.) until the erstwhile surprising policy choices of the bosses become rational.
Ok? Done with that, check to see whether your modified assumptions can be supported with the best logic and evidence. In particular, try to squeeze unexpected or special predictions from the modified assumptions and see if you can validate them with convincingly documented facts. Only after failing repeatedly in this endeavor, say I, should you consider abandoning the assumption of canonical rationality.
(In its mature application, a theorist applying this approach learns to pay close attention to the facts being described, and to the proper documentation of said facts, rather than to the interpretations imposed on such facts by various media vehicles and official and semi-official figures. Because there is always an implicit model of the world in such interpretations, and rather than adopt any such model of reality from some—quite possibly manipulative—authority, our aim is to build an independent model of reality as intellectually free and sovereign citizen scientists.)
My earlier piece contains what I hope is a careful defense of this, my preferred approach, so I won’t belabor that here. Instead, I will apply it.
I will generate two hypotheses to explain US escalations in Ukraine as rational policy. The first I will present here below; the second in my next piece.
For reasons having entirely to do with media and cultural conditioning, these hypotheses may be surprising, but what matters is whether they can give a better account of the evidence. Even if they cannot, and we decide to reject them (and we’ll have to reject—for sure—at least one of them), we will understand the world better for having considered them.
For that’s how scientific knowledge grows: we learn stuff by looking for evidence that will help us decide between competing hypotheses. Ergo, we cannot play this game of science unless competing hypotheses are allowed on the table. And we must allow, first, the most obvious hypotheses, such as those based on the assumption that the most powerful people in history are neither idiots nor madmen.
MAD and canonical rationality
As a preliminary step, I must briefly review the deservedly famous logic behind mutual assured destruction, or MAD.
The idea that mutual assured destruction could render war obsolete was first floated in the 19th century. In the 20th, with the arrival of nuclear weapons and the development of game theory, a modified version of this idea acquired a solid analytical framework in the work of several scientists, including Nobel-laureate economist Thomas Schelling (author of The Strategy of Conflict and Arms and Influence).
The game theorists pondered the logic of nuclear deterrence. The chief worry was a possible all-out (or ‘terminal’) nuclear war that might destroy the US and the Soviet Union, and perhaps also—via ‘nuclear winter’—all urban life on Earth. So the central research question became: How to avoid all-out nuclear war?
The game theorists found, unnervingly, that it’s rational to launch a nuclear sneak attack to avoid being obliterated by your rival’s nuclear sneak attack; and since this logic applies to both rivals, nuclear war is a ‘Nash equilibrium’: it is rationally guaranteed. Unless, that is, the structure of MAD applies. With MAD, you may rationally expect the opposite: nuclear deterrence.
And when does MAD (mutual assured destruction) apply? When neither country can win a nuclear war—even with a sneak attack—because both countries would be mutually destroyed regardless of who strikes first. This condition erases, for either country, the incentive for a sneak attack.
But what ensures that MAD applies? Second-strike capability. This means you can strike back after suffering a nuclear first strike (because early-warning systems alert you to an enemy attack long before their missiles hit you, and/or because your command centers and launch sites are built to survive a nuclear first strike).
If the US and USSR—or, these days, the US and Russia—both have second-strike capability (and apparently they do), then MAD applies. This deterrence equilibrium, however, holds only if both US and Russian bosses know that their counterparts possess second-strike capability. (If kept secret, second-strike capability has no deterrent value.)
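For the code-minded, here is a minimal toy sketch, in Python, of the structure just described. The payoff numbers are my own illustrative stand-ins; only their ordering does any work:

```python
# A toy 2x2 'sneak attack' game. WIN, DIE, and PEACE are illustrative
# assumptions; the deterrence argument needs only their ordering.
ACTIONS = ("STRIKE", "WAIT")

def payoffs(second_strike: bool):
    """Return {(row_action, col_action): (row_payoff, col_payoff)}."""
    WIN, DIE, PEACE = 50, -100, 0
    if second_strike:  # MAD: any strike, by anyone, destroys both countries
        table = {(a, b): (DIE, DIE) for a in ACTIONS for b in ACTIONS}
        table[("WAIT", "WAIT")] = (PEACE, PEACE)
        return table
    return {  # no second-strike capability: the sneak attacker wins outright
        ("STRIKE", "WAIT"): (WIN, DIE),
        ("WAIT", "STRIKE"): (DIE, WIN),
        ("STRIKE", "STRIKE"): (DIE, DIE),
        ("WAIT", "WAIT"): (PEACE, PEACE),
    }

def pure_nash(table):
    """Pure-strategy Nash equilibria: no player gains by deviating alone."""
    return [
        (a, b)
        for (a, b), (ua, ub) in table.items()
        if all(table[(a2, b)][0] <= ua for a2 in ACTIONS)
        and all(table[(a, b2)][1] <= ub for b2 in ACTIONS)
    ]

print(pure_nash(payoffs(second_strike=False)))  # ('STRIKE', 'STRIKE') is an equilibrium; ('WAIT', 'WAIT') is not
print(pure_nash(payoffs(second_strike=True)))   # ('WAIT', 'WAIT') is now an equilibrium: deterrence
```

Without second-strike capability, striking first is an equilibrium and mutual restraint is not; with it, mutual restraint becomes an equilibrium. That is the whole MAD argument in miniature.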
The acronym MAD was coined by Donald Brennan, dripping with conscious irony, because it sounds crazy that mutual assured destruction is what keeps the world from being destroyed. (But who is the crazy one? Brennan was a strong advocate of building an effective missile defense that, if successful, would have undermined the deterrence equilibrium and made nuclear war more likely.)
Anyway, compounding the irony, MAD only works if all decisionmakers are strictly rational. Crucially, this requires that US and Russian bosses prefer to rule over an economically productive population rather than over a nuclear wasteland.
Building a model to compete with Tucker Carlson’s
We know that an all-out nuclear war between the US and Russia carries, for these countries, the risk of an infinite cost: the destruction of all civilized urban life in both the US and Russia (and, perhaps from the cascading effects of that, an infinite cost for all of planet Earth, too).
Lots to mourn. But from the perspective of the narcissistic megalomaniacs vying for world power the chief issue is that they would lose all power—and that ain’t their preferred outcome. Hence, if escalations against Russia in the Ukraine war carry a non-zero probability of an infinitely costly nuclear war, then it is irrational for them to choose these escalations.2
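(Footnote 2 spells out the expected-utility arithmetic. For the code-minded, here is the same point as a minimal Python sketch, with an arbitrary finite payoff standing in for whatever the bosses would gain from escalating:)

```python
import math

def expected_utility(p_war: float, payoff_if_no_war: float) -> float:
    """Expected utility of escalation when all-out nuclear war is an infinite cost."""
    if p_war == 0.0:
        # The infinite-cost term vanishes only at exactly zero
        # (and 0 * -inf is undefined, so it must be handled separately).
        return payoff_if_no_war
    return p_war * (-math.inf) + (1.0 - p_war) * payoff_if_no_war

print(expected_utility(0.0, 100.0))    # 100.0: escalation is rational only at exactly zero risk
print(expected_utility(1e-12, 100.0))  # -inf: any non-zero risk sinks the whole sum
```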
Current US policy to escalate the Ukraine war is therefore consistent with canonical rationality only if US bosses are confident—entirely confident, I should say—that, under the present (and ever-increasing) provocations, the danger that Putin & Co. will launch a nuclear first strike is exactly zero.
Can an argument be defended that US bosses have good reasons to be entirely confident that the risk of all-out nuclear war from their escalations in Ukraine is exactly zero? Well, if no such argument can be defended, then Tucker Carlson will be right: current US escalation in the Ukraine war is “complete craziness.”
But I’ll make two attempts to ‘steel-man’—strengthen and defend—the claim that Biden & Co. are entirely confident that the risk of nuclear war is exactly zero.
My first attempt follows here…
Under our present assumptions, big bosses playing for world power are hyperrational. A corollary is that US bosses, being hyperrational, know themselves to be hyperrational. And they also know that their rivals—also big bosses equipped with powerful bureaucracies and playing for world power—are likewise hyperrational.
In other words, Putin & Co. understand—and US bosses know this—that pressing the nuclear button would turn their country to ashes and cost them all of their power, because US bosses, with nary a twitch, but rather with cold, grim, and unwavering determination, would then use their second-strike capability to erase Russia utterly from the face of the Earth.
But why would Putin & Co. be so damn certain this would be the outcome? Because US bosses are the biggest, baddest motherfuckers ever to walk the Earth.
That’s a technical term: it refers to the only bosses bad enough to use the deadliest weapon ever created, the nuclear bomb, against an enemy country, and—to boot—against a population of civilians. Who decides to drop nuclear weapons on two cities and incinerate the accountants, the nurses, the street sweepers, the restaurant goers, the students, the dads, the moms, the children, the babies…? Only the biggest, baddest motherfuckers ever. I dare say that’s… axiomatic.
So Putin & Co.—never mind their bluster—are profoundly afraid of US bosses, and they know that such consummate, trigger-happy, and powerful motherfuckers would not hesitate to launch a massive nuclear retaliation if attacked with nuclear weapons. Hell, US bosses might do that—assuming they could get away with it—if attacked with conventional weapons; the US bombs dropped on Hiroshima and Nagasaki, lest we forget, were a nuclear first strike. And US bosses, being hyperrational, know that Putin & Co. understand this.
That’s logic. But, in addition, US bosses are massively well informed, and must have multiple confirmations, from high-quality and independent lines of evidence, that Putin & Co. really are scared to death of US bosses, just as the strong, logical, rational expectation says. Hence, US bosses know themselves to have what strategy theorists call escalation dominance: the strategic logic of MAD will hold—has to hold—even under extreme conventional-war provocations in the Ukraine war.
To sum up, there is exactly zero danger of this becoming an all-out nuclear war between the US and Russia because, no matter what happens, being rational, Putin & Co. will prefer any outcome to the total destruction of their country, which they are quite certain would follow from striking at the US with a nuclear weapon. Because US bosses know this, they can push their advantage with conventional escalation on Russia’s border.
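For the code-minded, here is hypothesis 1 in game-tree form: a minimal backward-induction sketch, with payoffs that are my own illustrative assumptions (only their ordering matters):

```python
# Sequential toy game: the US escalates conventionally; Russia replies.
# Baked-in assumption (hypothesis 1): US retaliation after a Russian
# nuclear strike is certain, so 'NUKE' means Russia's total destruction.
RUSSIA_PAYOFF = {"ACCEPT": -10, "NUKE": -1_000}  # any loss beats annihilation
US_PAYOFF     = {"ACCEPT": +10, "NUKE": -1_000}  # a nuclear exchange ruins the US too

russia_reply = max(RUSSIA_PAYOFF, key=RUSSIA_PAYOFF.get)  # backward induction
print(russia_reply)                 # 'ACCEPT': the rational rival backs down
print(US_PAYOFF[russia_reply] > 0)  # True: so escalating is rational for the US
```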
Ok, so that’s one hypothesis.
The next step is to try and undermine it. Because in the social game of science we aim to build ever better hypotheses, and we can’t do that unless we find problems with the ones we currently have.
So, can my hypothesis be undermined? I think so, yes.
I will now proceed to do that—undermine my own hypothesis. I’ll need, however, a brief preparatory move to discuss the effects of human error on the logical structure of mutual assured destruction, or MAD.
The effects of human error on MAD
Several theorists have pointed out that, though rational action must be the default interpretive framework, one cannot rule out the human element in nuclear deterrence, because, even if you try very hard to be perfectly rational, and even if you succeed, you will still, every so often, bump your head against the edge of the bathroom door (or something) when getting up to pee at night: to err is human.
This human element in nuclear deterrence—error—was considered in Peter George’s novel Red Alert. It was an informed presentation, because, according to The Diplomat, “Peter George … was a former Royal Air Force Bomber Command flight officer with intimate knowledge of Bomber Command control systems, which apparently were similar to those of SAC.” (SAC stands for the US Strategic Air Command, the bomber force built to drop nuclear bombs on the Soviet Union should the imperative arise.) George’s ideas were then deliciously developed in Dr. Strangelove, a classic dark-humor tour de force directed by that Beethoven of film, Stanley Kubrick (screenplay by Kubrick, George, and Terry Southern).
In this masterpiece, considered one of the greatest (and funniest!) films of all time, US bosses, trying hard to be hyperrational, and fearing that American nuclear deterrence lacks credibility, devise ‘Plan R’ as “a sort of… retaliatory safeguard” that ensures solid second-strike capability for the US. As one General Turgidson explains to the US president:
“Plan R is an emergency war plan in which a lower-echelon commander may order nuclear retaliation after a sneak attack if the normal chain of command [from the president] has been disrupted. … The idea was to discourage the Russkies from any hope that they could knock out Washington—and yourself, Sir—as part of a general sneak attack and escape retaliation because of lack of proper command and control.”
Plan R had been devised to solve a perceived problem in nuclear deterrence: What if a Russian “sneak attack”—a surprise first strike—should kill the US president and his entire entourage, or at least leave them completely incommunicado? How then could a second strike happen if none authorized to launch it could give the order? This was a vulnerability, because so long as the Soviets believed they might escape retaliation for a nuclear first strike, they could be tempted to launch it.
Thus, for proper deterrence, the Soviets should know that even with a successful sneak attack they could not avoid an American retaliatory second strike. Plan R would “discourage the Russkies from any hope that they could knock out Washington … and escape retaliation,” thus preserving the logic of MAD and guaranteeing that a Russian sneak attack would never happen.
A brilliant idea—except for one detail. Operationally, Plan R required that an alternative command structure be created that would allow a “lower-echelon commander” to order a nuclear attack, and this opened the possibility for a rogue general to use this alternative command structure—without authorization from the US president—to launch an American first strike.
This is precisely the context, in Kubrick’s movie, for General Turgidson’s exposition, proffered to the US president in the awesome War Room: a rogue general has already launched an American nuclear first strike against Russia. Minutes remain.
“I admit the human element seems to have failed us here,” says Turgidson apologetically, by way of conceding the—now obvious—flaw in Plan R.
Compounding the emergency, the US bombers, already in flight, cannot be told to turn around, Turgidson further explains, because Plan R, once activated, requires bomber communication systems to reject all communication attempts. This is so that no clever Russian countersignals may sabotage the nuclear retaliation.
As it turns out, the Soviets, likewise rationally wishing to strengthen nuclear deterrence, have developed a solution of their own. But they have gone further, for they’ve sought to eliminate the possibility of human error in their second-strike capability by relying on artificial intelligence, as the dumbstruck US president learns from the Soviet ambassador, who’s been summoned to the War Room. The Soviets have a Doomsday Machine, the ambassador explains, designed to launch a second strike automatically if ever the US launches a nuclear first strike against the USSR.
But it’s Dr. Strangelove, a former Nazi mastermind and now a US government advisor (played to perfection by Peter Sellers), who explains to the US president (also played to perfection by Peter Sellers) the game-theoretic logic of this Doomsday Machine, which Strangelove can do because US planners had once considered building one.
A Doomsday Machine, Dr. Strangelove explains, is equipped with an automated, pre-programmed response that, in case of nuclear attack, will activate a massive nuclear retaliation as soon as the enemy’s first strike is detected. Once running, the device cannot be deactivated, even by the bosses themselves (that’s the whole point of it).
A brilliant idea—except for one detail. The Soviets had not yet told the Americans about their Doomsday Machine’s existence. And that’s a problem, because “the whole point of the Doomsday Machine,” Dr. Strangelove cruelly smiles, “is lost if you keep it a secret!” Then, turning to the embarrassed Soviet ambassador: “WHY DIDN’T YOU TELL THE WORLD, HEY?”
Once again the human element. For if the Doomsday Machine’s existence had been known, the rogue American general who launched the first strike would never have believed that a sneak attack against the USSR could be carried out without massive nuclear retaliation. But the Soviets had been slow to make the existence of the Doomsday Machine public.
If any US bombers reach their targets in the Soviet Union, the Doomsday Machine will fire and the world will be destroyed. Despite all efforts, one does…
The human element!
Dr. Strangelove is at once very scary and very funny. And a great artistic achievement. Peter Sellers is at his best. The cinematography and dialogue are just brilliant. And I honestly think (it’s my personal opinion) that Stanley Kubrick here demonstrated that he was the greatest artist—ever—of his medium. But is the movie reasonable? Could anything like that really happen?
That depends on whether the premises of the film are ecologically valid. In other words, it depends on whether the specific policies adopted to ensure second-strike capability by the US and the USSR had structures comparable to those depicted, and whether they were as vulnerable to human error as laid out in the film.
Naturally, after the film’s premiere, US government officials, civilian and military, hotly denied that anything like that could ever happen. But it was later documented that Dr. Strangelove had been “closer to reality than even most government officials would have known at the time.”
It follows, from all this, that we must take the ever-present human element—error—seriously.
And so, assuming that, in 2023, the US and Russia really are enemies, as the official narrative claims, and considering that human error must always exist, is it really possible to believe that US escalations in the Ukraine war carry a risk of all-out nuclear war equal to zero?
The inadvertent-escalation theorists would answer: ‘no way.’
Inadvertent escalation: to err is human
I don’t know whether Peter George’s Red Alert was the kick in the ass that got strategic theorists to start thinking hard about the human element. But start they did. Within such efforts, there is one specialized literature that, building on original work by Barry Posen, director of MIT’s Security Studies Program, has discussed the dangers of inadvertent escalation. These dangers are especially relevant here.
What is inadvertent escalation?
Consider an example. Suppose US spies are sent over to gather information on Russian command and control systems, especially their early-warning capabilities. And suppose the Russians find the spies and incorrectly—that is, they make an error—interpret all that as an effort to neutralize Russian early-warning systems. And suppose the Russians wonder to themselves: How much has the US learned? What if they can already neutralize our second-strike capability? And suppose that spooks them enough to launch a preemptive first strike against the United States, believing that their chances in a nuclear exchange are in any case better if they strike first…
The concept of inadvertent escalation was invented to cover cases like this, where something unintended happens that, through error, yields escalation leading to nuclear conflict.
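How fear turns noise into ‘evidence’ can be put in toy Bayesian form. Here is a minimal sketch of the spy example above; every probability is my own illustrative assumption:

```python
def posterior(prior: float, p_obs_if_hostile: float, p_obs_if_routine: float) -> float:
    """P(the US is targeting our second strike | spies detected), via Bayes' rule."""
    joint_hostile = prior * p_obs_if_hostile
    joint_routine = (1.0 - prior) * p_obs_if_routine
    return joint_hostile / (joint_hostile + joint_routine)

# Calm times: a low prior, and detected spies look like routine collection.
print(posterior(prior=0.01, p_obs_if_hostile=0.8, p_obs_if_routine=0.4))  # ~0.02

# Scared times: a frightened decisionmaker inflates the prior and reads
# the same observation as far more likely under the hostile hypothesis.
print(posterior(prior=0.20, p_obs_if_hostile=0.9, p_obs_if_routine=0.1))  # ~0.69
```

Same facts on the ground; wildly different conclusions. That is the error channel the inadvertent-escalation literature worries about.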
The famous ‘security dilemma’ that Robert Jervis explored plays an important role here: actions taken by State A meant merely to improve defense will often inadvertently decrease the security of State B and will thus be perceived by B as offensive moves, causing inadvertent escalation. It is difficult, in fact, to think of examples where improved defenses do not have this implication.
Take, for example, missile defense. Oh sure, you just built your missile-defense system to take down incoming missiles—it’s entirely defensive; why should anybody see that as an aggressive move? Except that an effective missile-defense system means that you can now attack with relative impunity, so its development may be interpreted by your rival (and not necessarily unreasonably, don’t play the innocent) as a preparatory move to launching your attack.
On purely rational considerations, effective defense against nuclear missiles ruins MAD, because it tempts whoever first develops an effective missile defense to launch a nuclear first strike before the missile-defense gap is closed. That’s why, according to some, Ronald Reagan’s proposed nuclear-missile-defense system, dubbed ‘Star Wars,’ spooked the Soviets and forced them to negotiate arsenal reductions and mutual inspections, which eliminated the political support for missile defense and kept the structure of MAD. (That’s one way to put a positive spin on what Donald Brennan, advising Ronald Reagan, was doing, and that argument has been made.)
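In the toy game from the MAD section above, an effective shield amounts to deleting the retaliation payoffs from the shielded side’s rows. Here is that variant as a sketch, with a perfectly effective shield as my illustrative assumption:

```python
# Give the row player a (hypothetically) perfect missile shield.
# Retaliation against the shielded side now fails, so its first strike
# wins even though the rival still has second-strike forces.
def payoffs_with_shield():
    WIN, DIE, PEACE = 50, -100, 0
    return {
        ("STRIKE", "WAIT"):   (WIN, DIE),    # the retaliatory strike is intercepted
        ("STRIKE", "STRIKE"): (WIN, DIE),    # even a simultaneous launch is intercepted
        ("WAIT",   "STRIKE"): (PEACE, DIE),  # incoming strike intercepted; free retaliation follows
        ("WAIT",   "WAIT"):   (PEACE, PEACE),
    }

table = payoffs_with_shield()
for rival_action in ("STRIKE", "WAIT"):  # row's best reply to each rival action
    best = max(("STRIKE", "WAIT"), key=lambda a: table[(a, rival_action)][0])
    print(rival_action, "->", best)      # STRIKE both times: striking strictly dominates
```

For the shielded side, striking first strictly dominates, so deterrence collapses: exactly the temptation described above.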
The ‘trapped lion’
Now, in the literature on inadvertent escalation it is considered obvious that error is more likely when decisionmakers are scared. If those running a country consider themselves to be facing an existential threat, they’ll be scared.
Ronald Reagan & Co. didn’t have to worry about that when they allied with Osama bin Laden to supply weapons to the Afghan mujahideen (from whose ranks the Taliban later emerged) fighting the Soviet-supported Afghan communist government. Why not? Because the mujahideen were hiding in caves and launching US-made Stingers from their shoulders. Failure to subdue them, though certainly a defeat of sorts, didn’t put the very existence of the Soviet Union at risk.
By contrast, Putin has always said (this is many years old) that NATO on his border would be perceived as an existential threat. And he emphasized that Russia would do anything and everything to ensure its survival, loudly reminding everyone that he had nuclear weapons. The most important line in the sand, the line that could not be crossed, was that Ukraine, on Russia’s border, should not join NATO.
Yet now this line has been crossed. Forget about which papers have or have not been signed; NATO is fighting on Ukraine’s side, so Ukraine, in the only way that counts, has indeed joined NATO. And that—Ukraine joining NATO (even setting aside the hot conventional war against Ukraine/NATO that has fought him to a standstill)—is what Putin defined as an existential threat.
So, accepting that narrative, Putin is a trapped lion. He’s scared. And if there is any risk of nuclear war, you absolutely don’t want to put your rival in that position, said Thomas Schelling:
“We have recognized that the [deterrent] efficacy of the threat [of massive nuclear retaliation] may depend on what alternatives are available to the potential enemy, who, if he is not to react like a trapped lion, must be left some tolerable recourse.”3
Might not Putin, the trapped lion fighting a hot conventional war on his border with a rival nuclear power, interpreting his situation as an existential threat, easily misperceive, in his rage and fear, all sorts of things? And might not such errors yield inadvertent escalation to nuclear warfare?
This scenario, a hot conventional war between nuclear powers, was precisely Barry Posen’s chief concern when he began theorizing about inadvertent escalation. Accordingly, in the context of the Ukraine war, Posen has observed that “Great power war … seems a great deal less unlikely than it did a few months ago.”
Just a moment—we already have a Great power war! But Posen no doubt means a direct war between the US and Russia—one where Russia might retaliate against US assets or even US territory, which then carries a much higher risk of escalation to nuclear war. That should be very scary. And yet, according to Posen, Biden & Co. simply won’t consider a “diplomatic strategy” to negotiate a ceasefire:
“If it wanted to, the United States could develop a diplomatic strategy to reduce maximalist thinking in both Ukraine and Russia. But to date, it has shown little interest in using its leverage to even try to coax the two sides to the negotiating table. Those of us in the West who recommend such a diplomatic effort are regularly shouted down.”
Hm. It seems that US bosses must know that the risk of all-out nuclear war cannot be exactly zero when they escalate against Russia on her border. But they are insisting on that. So are they irrational, then? Is Tucker Carlson right? Is this “complete craziness”?
Not so fast.
Yes, my attempt above to defend the claim that MAD will hold no matter what, and therefore that Biden & Co. are not crazy because they know the risk of all-out nuclear war to be exactly zero, has failed.
But I have something else up my sleeve. I think there is another way in which Biden & Co. might be completely certain that the risk of nuclear war is exactly zero. We just need to modify a different assumption in our model of the world. I’ll explain that in my next piece.
Stockholm International Peace Research Institute (SIPRI). (2022). SIPRI Yearbook 2022: Armaments, Disarmament and International Security. Oxford, UK: Oxford University Press, p. 15.
https://archive.org/details/sipri-yearbook-2022-armaments-disarmament-and-international-security-summary/mode/2up
Technically speaking, the ‘expected utility’ for the bosses calling the shots is negative if the term containing the infinite cost is multiplied by a non-zero probability. You can’t make the probability of nuclear war small enough that multiplying it by minus infinity will not give you minus infinity again, so the expected utility of such escalations, no matter what is in the other terms, is going to be negative if the probability of nuclear war is not exactly zero. By contrast, negotiating a ceasefire, for example, easily yields a positive expected utility.
Schelling, T. C. (1980). The Strategy of Conflict. Cambridge, MA: Harvard University Press, p. 6.