Extraordinary events—events with no obvious precedent—are potentially transformative but difficult to forecast. Because forecasting extraordinary events requires more judgment than forecasting routine events, we shouldn’t have much confidence in our forecasts of the extraordinary. This essay is cross-posted on Metaculus Journal and is based on a talk I recently gave on Metaculus for Forecast Friday.
The hardest, most interesting forecasting questions are about things that haven’t happened before. Many important events—wars, famines, pandemics, natural disasters—are relatively predictable. These events have occurred regularly throughout history, so we can say with some confidence how often they occur in a range of different circumstances. They may be extraordinary events in the sense that they disrupt our regular day-to-day lives or even cause empires to rise and fall, but in a broader, historical sense they’re routine. The harder, more interesting questions—the kinds of questions futurists think about—are about things that are unprecedented in some important way. When an event has never happened before, it’s hard to know both how likely it is and what effects it’s likely to have. These extraordinary events can alter the course of history by disrupting established patterns in a way that routine events rarely do.
Every novel event is effectively a real-life experiment with uncertain effects. Before the first nuclear weapon test in 1945, the bomb’s designers realized the fireball created by the explosion would heat the Earth’s atmosphere to temperatures unprecedented in the planet’s history. There were reasonable concerns—in the absence of historical evidence for what happens when the atmosphere gets that hot—that the blast would set off a runaway reaction in atmospheric nitrogen. It wasn’t crazy to think the first nuclear weapon test might essentially set the planet on fire. The bomb’s designers concluded—correctly, as it turns out—that such a reaction wouldn’t be self-sustaining because most of the energy it produced would radiate away. But they acknowledged there was nevertheless a “distinct possibility that some other less simple mode of burning may maintain itself in the atmosphere.”1
Nuclear scientists actually did fail to anticipate one of the reactions produced in the Castle Bravo test of a new thermonuclear bomb design in 1954. Their calculations assumed that lithium-6 would be the only isotope of lithium to contribute to the explosion. But lithium-7, which they had treated as essentially inert, also reacts when struck by highly energetic neutrons, breaking apart into tritium and helium; the extra tritium fed additional fusion reactions. The bomb ended up having 2.5 times the yield they expected. The explosion was so large that it destroyed or disabled most of the instruments put in place to collect data. The fallout from the blast, carried by shifting winds, exposed residents of the Marshall Islands to high levels of radiation, making the Castle Bravo test one of the worst radiological disasters in history.
We now know from numerous nuclear weapons tests that they don’t produce self-sustaining nuclear reactions in the atmosphere. But there’s still no historical precedent for an exchange of nuclear weapons between two states. Because nuclear weapons are so different in design and effect from conventional weapons, it’s reasonable to think a nuclear war might transform or even destroy human civilization. There’s likewise no precedent for the development of superintelligent AI. Because superintelligent AI would be so different in design and effect from even the most sophisticated AI in use today, it’s reasonable to think it, too, might transform or even destroy human civilization. The fact that we haven’t yet had a nuclear war or developed superintelligent AI doesn’t necessarily mean either is impossible or even particularly unlikely. Because we’re in uncharted territory, it’s hard to know.
Our future hinges on extraordinary events. Routine events happen regularly, generally without disrupting established patterns. But extraordinary events like a nuclear war or the development of superintelligence could transform the world in hard-to-foresee ways. The very fact that they seem unprecedented to us means we already suspect they could disrupt the routine course of events. Any sufficiently long-range forecast is effectively a forecast of extraordinary events like these—of the likelihood of nuclear war, the development of superintelligence, and so on—because what life will be like a generation from now will largely depend on such plausible wildcards.
The most reliable way of forecasting something is generally to consider how frequently it has happened in similar circumstances, because our intuition about the likelihood of singular events tends to be poor. With enough data—with a large record of similar cases—you don’t even need to know why a thing happens. How likely it is essentially becomes an empirical question, because you can observe how often it actually does happen. But because there’s no precedent for extraordinary events, there’s no obvious reference class of similar events for forecasters to draw on, which means forecasting them requires more judgment than forecasting routine events does.
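To make the contrast concrete, here is a minimal sketch in Python of what a reference-class estimate looks like when a record of similar cases exists, alongside Laplace’s rule of succession, one standard adjustment that keeps an event with no observed precedent from being assigned exactly 0%. The counts are purely illustrative.

```python
# A minimal sketch of reference-class forecasting. The function names and
# counts here are illustrative assumptions, not anything from the essay.

def base_rate(occurrences: int, opportunities: int) -> float:
    """Observed frequency: how often the event happened when it could have."""
    return occurrences / opportunities

def laplace_rule(occurrences: int, opportunities: int) -> float:
    """Rule-of-succession estimate: add one imaginary success and one failure
    so an event with no observed precedent isn't assigned exactly 0%."""
    return (occurrences + 1) / (opportunities + 2)

# A routine event, observed in 37 of 100 comparable cases:
print(base_rate(37, 100))     # 0.37
print(laplace_rule(37, 100))  # ~0.37 -- with plenty of data the prior barely matters

# An extraordinary event, never observed in 78 comparable periods:
print(base_rate(0, 78))       # 0.0 -- clearly overconfident
print(laplace_rule(0, 78))    # 0.0125 -- small, but not zero
```

The rule of succession is only one possible prior; the broader point is that once the record of similar cases runs out, the estimate is driven almost entirely by assumptions rather than observation.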
Sometimes, of course, we don’t need a precedent for something to know it’s about to happen. When I see a car coming, I can make a prediction about whether it’s going to hit me without giving much thought to how often people get hit by cars. When we forecast extraordinary events, we can use that same ability to project chains of events forward through time. Because the range of possible outcomes can grow quickly—and because we may always be missing something—it’s hard to project those chains far into the future with much confidence. But in the absence of clear precedents for whatever it is we’re trying to forecast, this may be the best we can do.
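One way to see why confidence erodes so quickly is to treat a projected scenario as a chain of steps and multiply their conditional probabilities, as in the sketch below. The step probabilities are invented for illustration; the point is how fast a chain of individually plausible steps becomes unlikely, and how much the answer moves if each step is even slightly misjudged.

```python
# A minimal sketch of projecting a chain of events forward. The step
# probabilities are made up; the point is how quickly confidence erodes.
from math import prod

# Hypothetical conditional probabilities for four successive steps in a scenario.
steps = [0.7, 0.6, 0.5, 0.6]

# The scenario as a whole requires every step to happen.
print(round(prod(steps), 3))  # 0.126 -- four individually plausible steps

# If each step estimate is off by +/- 0.1, the joint estimate swings widely.
low = prod(p - 0.1 for p in steps)
high = prod(min(p + 0.1, 1.0) for p in steps)
print(round(low, 3), round(high, 3))  # 0.06 0.235
```

And this assumes the chain itself is right; a step we never thought to include doesn’t show up in the arithmetic at all.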
When there’s no precedent for an event, we can also compare it to similar but more routine events. Nuclear wars are likely to be different from conventional wars in important ways, but they are likely to escalate in similar ways. Superintelligent AI is likely to be different from previous technological innovations in important ways, but the comparison can still be instructive. The less similar the comparison cases, the less useful they are for estimating a base probability. But in the absence of clear precedents, an imperfect comparison is better than no comparison at all.
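One rough way to formalize this is to borrow a base rate from the analogous reference class and discount it according to how much we trust the analogy, drifting back toward maximum uncertainty as the comparison weakens. The weighting scheme and numbers in this sketch are illustrative assumptions, not real estimates.

```python
# A minimal sketch of leaning on an imperfect analogy: borrow a base rate from
# a similar-but-not-identical class of events, then pull it back toward maximum
# uncertainty (50%) as trust in the analogy weakens. Purely illustrative numbers.

def analogy_estimate(analog_rate: float, similarity: float) -> float:
    """Blend the analog's base rate with a 50/50 ignorance prior,
    weighted by how similar we judge the reference class to be (0 to 1)."""
    return similarity * analog_rate + (1 - similarity) * 0.5

# Base rate borrowed from a hypothetical record of analogous, more routine events.
analog_rate = 0.2

print(analogy_estimate(analog_rate, similarity=0.9))  # 0.23 -- close analogy
print(analogy_estimate(analog_rate, similarity=0.3))  # 0.41 -- weak analogy
```

The particular blend matters less than the direction: the weaker the analogy, the closer the estimate should sit to plain uncertainty.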
Forecasting extraordinary events is fundamentally speculative. When we don’t have a long record of similar events we can observe, we’re left to rely on our personal judgment about what matters and how the world works. In the absence of better empirical data, our forecasts are likely to be strongly influenced by our presuppositions and biases. That means we need to work even harder than we otherwise would to question our assumptions and analyze the situation from a wide variety of perspectives. Most of all, we need to recognize the limits of our ability to forecast extraordinary events: any realistic forecast of the extraordinary should be highly uncertain.
My fellow forecasters—I can’t take any credit for their forecast since I didn't participate in it—were among the few to say Erdoğan was the favorite to win reelection in Turkey’s presidential election. If you’re interested in understanding the situation in Turkey, I highly recommend 's first-rate analysis. If you found this valuable, the best way to support my work right now is by sharing it with others!