How can we assess risks with limited historical precedents, such as nuclear war? With financial support from the Global Challenges Foundation, Seth Baum developed a model that relies on systematic analysis of cause and consequence, taking near-misses and other incidents into consideration. This method helps identify effective mitigation policies, and can be applied to the study of other global catastrophic risks. Julien Leyre of the Global Challenges Foundation digs deeper in an interview with Seth Baum.
Julien Leyre: Seth, nuclear war risk has been a key point of focus for your research at the Global Catastrophic Risk Institute. Why is it important to do this kind of research?
Seth Baum: There are two reasons. First, nuclear war is an important risk in its own right. It was the first human-made global catastrophic risk, and has been a major one ever since. A staggering 15,350 weapons still remain, of which 14,300 are held by the US and Russia. Right now, 4,000 of these weapons are in active deployment, meaning that they are available for use at any time. A nuclear war could be just moments away. Yet despite the topic’s importance, there has been little risk analysis of nuclear war.
The second reason is that studying the risk of nuclear war helps us understand and address some challenges shared by the study of other global catastrophic risks. For example, no massive nuclear war has ever occurred, but nuclear weapons were used once, at the end of World War II, and, several times since then, nuclear war almost occurred. How do you use such limited historical data to estimate the probability of a future nuclear war? This challenge is shared by other global catastrophic risks.
Julien Leyre: What does the study of nuclear war risk look like? What’s the first step when you conduct risk assessment on this scale?
Seth Baum: The starting point is to recognize that the risk of nuclear war has three different components: the probability of a nuclear war occurring, the specifics of what happens during the war, and impacts after the war. These three parts are interrelated, but each needs a distinct type of analysis. Our research thus far has focused on the first and third, but all three are important.
To assess the probability of occurrence, we model the various pathways through which nuclear war could occur. For example, it could be that conventional war escalates, as in World War II, or that a crisis erupts into a nuclear war, as almost happened during the Cuban missile crisis. For each of those pathways, we model the sequence of steps – the chain of successive events that take us from a calm condition to nuclear war.
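The pathway structure described above can be sketched as a simple calculation: the chance of traversing one escalation chain is the product of its conditional step probabilities, and the overall probability is approximated by summing over the (rare, roughly independent) pathways. This is a minimal illustration, not Baum's actual model, and every number in it is a hypothetical placeholder.

```python
# Minimal sketch of a pathway model for nuclear war probability.
# Each pathway is a chain of successive steps from calm to war; the
# chance of traversing the whole chain is the product of conditional
# step probabilities. All numbers are illustrative placeholders.
from math import prod

pathways = {
    # P(step | previous steps), per year -- hypothetical values only
    "conventional_war_escalates": [0.02, 0.10, 0.25],
    "crisis_erupts":              [0.05, 0.08, 0.15],
}

def pathway_probability(steps):
    """Probability of traversing every step in one escalation chain."""
    return prod(steps)

def total_probability(pathways):
    """Approximate total annual probability by summing over rare pathways."""
    return sum(pathway_probability(s) for s in pathways.values())

for name, steps in pathways.items():
    print(f"{name}: {pathway_probability(steps):.6f}")
print(f"total: {total_probability(pathways):.6f}")
```

The value of decomposing the risk this way is that each step probability can be informed by its own evidence, and policies can be evaluated by which step of which chain they interrupt.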
As for impacts, we model the various ways that the detonation of nuclear weapons can affect different aspects of society and the environment. This involves a lot of systems analysis since there are so many things that nuclear detonations can impact. The most obvious are direct effects like buildings collapsing or burning and people near the detonation getting hurt and killed. But there are also many indirect effects like nuclear winter and systemic effects across the global economy, and these are really important.
“Which policies are most effective at reducing the risk of nuclear war? A good risk model can go a long way towards figuring this out.”
Julien Leyre: As you mentioned, nuclear bombs have only been used once, at the end of the Second World War. With so little historical data, how can you develop robust risk assessment?
Seth Baum: It’s not easy! Traditional risk analysis is based mainly on historical data, but this does not work for nuclear war. The traditional approach would say there has been one nuclear war in about seventy years of nuclear weapons being around, therefore there is a one-in-seventy chance of nuclear war happening in any given year. But the conditions in 1945 were very different from the conditions in 2016, so it’s not a fair comparison.
While there has only been one nuclear war, there have been many near-misses: incidents that went partway to nuclear war. They range from the Korean War in 1950-1951, when the U.S. considered using nuclear weapons against Chinese forces, to recent moments in the Ukrainian Civil War, in which Russia has made several nuclear threats. We combine data on near-misses with our models of the pathways to help quantify the probability. However, even this doesn’t allow us to calculate probabilities as precisely as we would for other risks. Therefore, an important part of nuclear war risk analysis is acknowledging inherent uncertainty and thinking intelligently about what to do in spite of everything we don’t know.
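One way to picture the contrast between the naive frequency approach and a near-miss-based approach is the sketch below. It assumes near-misses arrive at a roughly constant rate and that each escalates to war with some small, deeply uncertain probability; both inputs are hypothetical placeholders, not real estimates, which is why the output is a range rather than a point value.

```python
# Illustrative sketch: estimating the annual probability of nuclear war
# from near-miss data. All numbers are hypothetical placeholders.

def annual_war_probability(near_misses, years_observed, p_escalation):
    """Annual probability ~= near-miss rate per year times the
    (assumed) chance that a single near-miss escalates to war."""
    return (near_misses / years_observed) * p_escalation

# The naive frequency approach from the text: one war in ~70 years.
naive_estimate = 1 / 70

# Because the escalation probability is deeply uncertain, report a
# range over plausible values instead of a single point estimate.
low  = annual_war_probability(20, 70, p_escalation=0.001)
high = annual_war_probability(20, 70, p_escalation=0.05)
print(f"naive: {naive_estimate:.4f}, range: {low:.5f} to {high:.5f}")
```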
Julien Leyre: When all this analysis has been done, how can you apply it? How does this risk modeling work help determine the right action to reduce risk?
Seth Baum: That’s a good question. Ultimately, the important part is not the risk itself, but what people can do to reduce the risk. Studying nuclear war risk is an interesting intellectual exercise, but the real reason to do it is that major policy questions depend on it.
Perhaps the simplest question is, how high should we place nuclear war on the agenda? Attention is a scarce resource, especially for policy-makers, who could be working on so many different issues at any given time. One conclusion that I see from our risk analysis is that nuclear war should be higher on the agenda than nuclear terrorism. The probability of nuclear terrorism may be somewhat larger, but the severity of a nuclear war can be much, much larger.
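The agenda-setting comparison above has a simple structure: expected harm is probability times severity, so a less probable but vastly more severe risk can still dominate. The sketch below illustrates only that structure; the figures are invented placeholders, not estimates from the interview.

```python
# Rough sketch of the probability-vs-severity comparison. The figures
# are illustrative placeholders; the point is the structure, not the
# numbers.

def expected_harm(annual_probability, severity):
    """Expected annual harm = probability of the event times its severity."""
    return annual_probability * severity

# Nuclear terrorism: somewhat more probable, far less severe (hypothetical).
terrorism = expected_harm(annual_probability=0.01, severity=1e5)
# Nuclear war: less probable, vastly more severe (hypothetical).
war = expected_harm(annual_probability=0.005, severity=1e9)

print(f"terrorism: {terrorism:.0f}, war: {war:.0f}")
```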
Another important question is, which policies are most effective at reducing the risk of nuclear war? A good risk model can go a long way towards figuring this out. Indeed, this is a core benefit of a good risk model. Risk reduction isn’t the only factor for evaluating policies – for example, some policies are more expensive, or require more political capital – but risk reduction is undoubtedly important.
Limitations in our current risk models mean that we can only apply them to certain policies. Improving the models so that we can apply them more broadly is a big research priority. Meanwhile, they can still help in other ways. For example, the models show that nuclear war and nuclear terrorism are not completely separate issues. One scenario has a nuclear terrorist attack triggering a nuclear war between countries. So reducing the risk of nuclear terrorism also reduces the risk of nuclear war. Seeing these sorts of policy insights across the full range of nuclear war scenarios and impacts is another benefit of this type of risk analysis.
Julien Leyre: What would it take to integrate your model into a more generalised risk mitigation framework? Could we, for instance, quantify nuclear war risk and other global catastrophic risks? What would it take to get there?
Seth Baum: It would take a lot of research! Some global catastrophic risks are relatively well quantified, especially asteroid collisions and volcanic eruptions. But even those have important missing pieces, especially regarding how impacts cascade across the global economy. Modeling for that is actually quite similar to modeling nuclear war impacts. These synergies are a reason to study various risks together.
The other big piece of a general global catastrophic risk mitigation framework is interaction between the different risks. For example, our nuclear war impacts model includes links to several other global catastrophic risks. Nuclear war can increase pandemics risk by destroying public health infrastructure. It can increase climate change risk by impeding renewable energy, though it can (rather morbidly) also decrease climate change risk by killing off a lot of people so they don’t emit greenhouse gasses anymore. It can also cause the failure of a risky environmental technology called geoengineering. In principle, our nuclear war impacts model should include full models of these other risks. We’re not there yet, but it’s an exciting research direction.
During the Cuban missile crisis, in October 1962, US Navy ships dropped signaling depth charges on a Soviet submarine that carried a nuclear torpedo. Two of the three senior Soviet officers aboard wanted to launch it in response, but the procedures required agreement between all three. The third officer, Vasili Arkhipov, refused, potentially averting nuclear war.
In September 1983, a Soviet early warning satellite appeared to detect five land-based missiles launched from the United States toward the Soviet Union. The officer on duty, Stanislav Petrov, had only minutes to decide whether this was a false alarm. Procedure would have required him to alert his superiors but, on gut instinct, he reported the incident as a false alarm. Later investigations revealed that the satellite had mistaken sunlight reflecting off the tops of clouds for missile launches.
On January 25, 1995, Russian radar detected a scientific weather rocket over the northern coast of Norway. Operators suspected it was a nuclear missile. President Yeltsin reportedly faced the decision to launch nuclear weapons in retaliation. He decided not to, guessing – correctly – that the rocket was not an actual attack.