In the days when the Soviet Union stretched across 11 time zones and the Strategic Defense Initiative wasn’t yet a twinkle in the eye of a just-elected President Ronald Reagan, a political scientist named Kenneth Waltz provoked nuclear hawks and doves alike by publishing an article called “The Spread of Nuclear Weapons: More May Be Better.”
Waltz, a founder of the so-called “structural realism” or “neorealist” school in international-relations theory, argued that if peace is defined as “the absence of general war among the major states,” an unprecedented era of peace had prevailed since the Hiroshima and Nagasaki bombings in 1945. It would be nice, he continued, if nations possessed only conventional weapons and never fought. But given that they do come into conflict and that “ten or twelve or eighteen nuclear-weapon states” would probably exist someday—there were seven in 1981, when he wrote, and now there are nine—“the gradual spread of nuclear weapons is better than no spread and better than rapid spread.”
By favoring “gradual” spread, Waltz stopped his argument short of its reductio ad absurdum, which would be that because nuclear states have strong incentives to avoid wars with each other, the world automatically becomes that much safer whenever another nation acquires thermonuclear weapons. And by maintaining that a gradual spread was better than none, he avoided the logical inconsistency of conventional deterrence theorists who believe that proliferation should be prevented.
Today Waltz and other neorealists continue to argue that states would do whatever served their self-interests but for those constraints imposed by the international balance of power. When nuclear weapons enter the picture, the neorealists contend, the costs of waging war exceed the bearable (or even the survivable), making a balance of power based on nuclear deterrence inherently and uniquely stable.
Whether one thinks that Waltz’s argument is crazy or makes sense, advancing technology is creating increasingly propitious conditions for it to be tested. Outside Wilmington, North Carolina, for example, is an unexceptional building that in 2012 or 2013 will probably become the world’s first commercial plant for uranium enrichment by LIS, or laser isotope separation. LIS at the proposed facility promises to produce reactor-grade uranium, in which the concentration of fissile uranium-235 has been increased from its natural level of about 0.7 percent to as much as 8 percent, at radically lower cost and with less waste than current diffusion or centrifuge techniques. Charles D. Ferguson, president of the Federation of American Scientists, notes that if this particular process works as advertised, “not only will LIS be a far more efficient method, it’ll also be far more difficult for outsiders to detect.”
Nowadays, when experts like Ferguson are asked what the surest route would be for nations to covertly produce weapons-grade fissile material—usually defined as highly enriched uranium with a U-235 content of at least 90 percent—they point to LIS. The technology could someday be within reach even of actors that are not nation-states. Potentially requiring only a midsize warehouse and drawing no more electricity than a dozen suburban homes, an LIS plant might operate unnoticed almost anywhere. The Lashkar Ab’ad laboratory, 40 kilometers west of Tehran, went undetected as Iran’s pilot LIS site from 2000 until 2003, when Iranian dissidents revealed its existence to the International Atomic Energy Agency. The IAEA’s investigators subsequently concluded that highly enriched uranium could have been produced at Lashkar Ab’ad if all the planned equipment had been installed. Today, according to the dissidents, the Tehran regime’s LIS research continues elsewhere; not incidentally, Fereydoon Abbasi Davani, a survivor of the recent spate of car bombings targeting Iranian scientists allegedly involved in Tehran’s bomb program, is an expert on the technology.
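How much those enrichment levels matter can be made concrete with the standard separative-work arithmetic used to size enrichment plants. The sketch below is illustrative only: the 0.7, 8, and 90 percent assays are the figures cited above, while the 0.25 percent tails assay is an assumed typical value, not a parameter of LIS or of any particular plant.

```python
import math

def separation_potential(x):
    """Standard value function V(x) for an assay x (fraction of U-235)."""
    return (2 * x - 1) * math.log(x / (1 - x))

def enrichment_effort(product_assay, feed_assay=0.00711, tails_assay=0.0025):
    """Separative work (SWU) and natural-uranium feed needed per kg of product.

    feed_assay is natural uranium (~0.711% U-235); the 0.25% tails assay
    is an assumed typical figure, not specific to any real facility.
    """
    feed = (product_assay - tails_assay) / (feed_assay - tails_assay)
    tails = feed - 1.0
    swu = (separation_potential(product_assay)
           + tails * separation_potential(tails_assay)
           - feed * separation_potential(feed_assay))
    return swu, feed

for assay in (0.08, 0.90):  # 8% "reactor-grade" vs. 90% weapons-grade, as above
    swu, feed = enrichment_effort(assay)
    print(f"{assay:.0%} product: about {swu:.0f} SWU and {feed:.0f} kg of feed per kg")
```

Run with these assumed numbers, the arithmetic makes the proliferation worry concrete: a kilogram of 90 percent material takes on the order of 200 SWU, but roughly four-fifths of that separative work has already been done by the time the material reaches the 8 percent level a commercial LIS plant is meant to supply.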
LIS is just the front end of the trend. In the United States, after canceling plans to use Yucca Mountain as the national nuclear waste repository, the Obama administration established its Blue Ribbon Commission on America’s Nuclear Future. Per Peterson, the only nuclear engineer on that commission, has long proposed that the United States do what other nuclear nations do: recycle its spent nuclear fuel. Within two to three decades, various reactor designs could make it possible to burn waste down to the point of harmlessness. The problem is that such advances would also make it quicker and easier to produce the material needed for nuclear weapons. That could put us at the dawn of a golden age of nuclear arms proliferation.
Failures of Deterrence
The Global Zero movement, whose membership includes such former heads of state as Mikhail Gorbachev and Jimmy Carter, thinks a world with nukes in the hands of, say, Myanmar and Syria (to name two regimes that might aspire to nuclear status) would be a far more unstable place. It wants all nuclear arms banned by 2030. So do former U.S. secretaries of state Henry Kissinger and George Shultz, former U.S. secretary of defense William Perry, and former U.S. senator Sam Nunn, who together kick-started the Global Zero movement in 2007 by proposing a phased elimination of the world’s nuclear arsenals. As someone responsible for the decision to put multiple warheads on U.S. ICBMs in the early 1970s, Kissinger is an interesting apostate against the doctrine of nuclear deterrence. Still, he’s long been mindful of the possibility that deterrence is ineffective or unnecessary. As he wrote in Diplomacy, his 1994 magnum opus, “Deterrence can only be tested negatively, by events that do not take place, and … it is never possible to demonstrate why something has not occurred … or whether the adversary ever intended to attack in the first place.”
Now, one would expect the most destructive of all weapons to have some deterrent capability, and on that count the historical record is persuasive. Between 1940 and 1996, the United States built more than 70,000 fission and fusion bombs. The USSR amassed a similar arsenal. And during the almost 50 years that the Cold War lasted, American and Soviet leaderships always arrived—sometimes even as they stated the opposite—at the conclusions that Bernard Brodie, the first American nuclear strategist, reached immediately after Hiroshima and Nagasaki. “Thus far the chief purpose of our military establishment has been to win wars,” Brodie wrote in 1946. “From now on its chief purpose must be to avert them.”
As Waltz notes, general war among the major states was indeed averted. But Thomas Schelling, one of the principal intellectual architects of U.S. Cold War strategy, argues that this doesn’t mean deterrence worked very well. “Since 1945, at least seven or eight wars have occurred, depending on how you count, where one side had nuclear weapons and didn’t use them,” Schelling says. “Nuclear weapons didn’t deter North Korea and China in the 1950s. In 1973, Israel had nuclear weapons it could have delivered against Cairo and Damascus.” In his prize-winning essay “The Myth of Nuclear Deterrence” (2008), Ward Wilson, a senior fellow at the James Martin Center for Nonproliferation Studies, in Monterey, California, also cites wars where one side possessed nuclear weapons that failed to deter the other side from aggression. He concludes that “the practical record of nuclear deterrence shows more obvious failures than obvious successes.”
Says Jacek Kugler, a professor of world politics at Claremont Graduate University who studied under Brodie and is now a consultant to organizations like the Pentagon and the World Bank, “The critical assumption in Brodie’s original model, and in almost every single model of deterrence used today by American policymakers, is that if I simply increase my opponent’s cost, I decrease the probability of war.” Kugler begs to differ: “To start with, people don’t go to war because of the cost. What they calculate is the possibility of gain. So in the 1970s some of us started saying that the conventional theory is nonsense.” Kugler believes that dissatisfied or angry challengers could risk a nuclear action—whether a dirty bomb, a limited nuclear attack, or an all-out nuclear exchange—if they believed the conditions were strategically favorable.
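The disagreement amounts to two readings of the same expected-utility arithmetic. The sketch below is a toy illustration of that point rather than Kugler’s actual model, and every number in it is invented for the example.

```python
def expected_utility(p_win, gain, cost):
    """Toy expected utility of attacking: chance-weighted gain minus
    chance-weighted cost. All figures are invented for illustration."""
    return p_win * gain - (1 - p_win) * cost

# The classical assumption: pile on cost and the attack stops paying.
print(expected_utility(p_win=0.5, gain=100, cost=50))    # 25.0   -> attack "pays"
print(expected_utility(p_win=0.5, gain=100, cost=500))   # -200.0 -> deterred

# Kugler's objection: a challenger that sees conditions as favorable
# (a high perceived chance of success, a large perceived gain) can still
# come out ahead even against a very large threatened cost.
print(expected_utility(p_win=0.9, gain=1000, cost=500))  # 850.0  -> not deterred
```

In this toy framing, the policy dispute is about which variables real decision-makers actually weigh, not about the arithmetic itself.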
Moreover, as Winston Churchill noted when explaining nuclear deterrence to Parliament in 1955, it will never “cover the case of lunatics or dictators in the mood of Hitler when he found himself in his final dugout.” A regime facing its own demise has passed beyond worrying about risk. It’s perfectly credible that such a regime will employ nuclear weapons, especially if, as North Korea does, it stands at a disadvantage in conventional military terms.
Game Theory
The dangers of this dynamic were part of what Schelling had in mind during his Nobel lecture in 2005, when he won the prize in economics for enhancing “our understanding of conflict and coöperation through game-theory analysis.” In his lecture, Schelling examined how, over the six decades since Hiroshima and Nagasaki, a nuclear “taboo” had effectively been constructed to prevent such terrible weapons from being used again. Great diplomatic skill and international coöperation would be necessary to maintain that taboo in a world where, Schelling told his audience, America and other major powers were very likely to experience “what it is like to be the deterred one, not the one doing the deterring.”
Such a world, where smaller states acquire nuclear weapons to deter the overwhelming conventional might of the U.S., is not quite the future that the Pentagon’s nuclear strategists envisioned after the Cold War. In 1995, the United States Strategic Command (Stratcom) produced a document called “Essentials of Post–Cold War Deterrence,” which would be largely restated in George W. Bush’s 2001 Nuclear Posture Review. “Since we believe it is impossible to ‘uninvent’ nuclear weapons,” the Stratcom text declared, they “seem destined to be the centerpiece of U.S. strategic deterrence for the foreseeable future.” Before the first Gulf War, the authors noted, President George H. W. Bush had apprised Saddam Hussein that the U.S. would not “tolerate the use of chemical or biological weapons.” Though this merely hinted at nuclear retaliation, Iraq didn’t (according to the U.S. government) field biochemical weapons in the 1991 conflict. The lesson, they concluded, was that in the post–Cold War era, nukes not only should remain central to U.S. strategy but could become part of “policy enforcement”—in effect, a threat to ensure that when the country chose to fight, it would do so on favorable terms. Adversaries should understand that “our actions would have terrible consequences for them,” but the U.S. “should not be very specific.”
This was singing straight from the Cold War hymn sheet. During the 1950s, U.S. secretary of state John Foster Dulles had argued that it was rational to “remove the taboo” from nuclear weapons so as to intimidate an adversary into concessions; in the late 1960s, Richard Nixon had explained that enemies should be led to believe that “Nixon is obsessed … We can’t restrain him when he’s angry and he has his hand on the nuclear button.” These statesmen had echoed theorists like Schelling and Herman Kahn at the RAND Corporation: Schelling, for instance, had noted that in bargaining, uncertain retaliation is more credible and more efficient than certain retaliation. What had changed by 1995 was that the USSR was no longer a limiting counterforce in international affairs. In the new circumstances, the Stratcom authors believed, the deterrent threat of U.S. nuclear weapons could exert even greater sway.
But these planners now seem naïve: a strategy centered on nuclear deterrence has proved worthless against the actual challenges America has confronted. Most notably, on September 11, 2001, its nuclear arsenal had no effect on al Qaeda’s calculations.
Furthermore, almost all those Cold War strategists whose ideas were parroted by “Essentials of Post–Cold War Deterrence”—particularly those who approached nuclear deterrence via game theory—would probably have told Stratcom it wouldn’t work.
Cold War theories about deterrence were based in part on how two players would behave in any zero-sum game (if one player wins, the other loses), with each player seeking an optimal way to minimize his maximum loss. But in games with more than two players, strategic complexity grows exponentially as the number of players increases. In a multiplayer game of nuclear deterrence, says game theorist Martin Shubik, an economics professor emeritus at Yale, this means escalating instability—even if all actors are assumed to be rational, which does not hold in the age of suicide bombers.
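The two-player logic described above, in which each side picks the strategy that minimizes its maximum loss, can be shown in a few lines, along with the combinatorial reason multiplayer versions are so much harder. The payoff matrix below is invented purely for illustration and encodes no real strategic data.

```python
import numpy as np

# Row player's payoffs in a two-player zero-sum game (the column player
# receives the negative of each entry). The numbers are made up.
payoffs = np.array([
    [ 2, -3],
    [-1,  4],
])

# Pure-strategy maximin: each player picks the row/column whose worst
# outcome is least bad, i.e. minimizes its maximum loss.
row_security = payoffs.min(axis=1).max()      # -1: row player's guaranteed floor
col_security = (-payoffs).min(axis=0).max()   # -2: column player's guaranteed floor
print(row_security, col_security)

# With n players holding k strategies each, the joint outcomes to reason
# over grow as k**n. Two strategies apiece for the nine nuclear states
# mentioned earlier already gives 2**9 = 512 profiles, versus 4 in the table.
print(2 ** 9)
```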
“My main conclusion is that the United States would be strongly advised to call for a global group to supervise all nuclear states and should be the first to open its own facilities so as to get the ball rolling for a worldwide inspection program,” Shubik says. “Without something like that, the odds of avoiding a nuclear war in the next 20 years are very low.”
Mark Williams is a contributing editor at TR. He last reviewed Stewart Brand’s Whole Earth Discipline in May/June 2010.