It is possible to predict the future. Research on superforecasters shows that forecasting is a skill that can be learned. While some people probably have more natural aptitude for forecasting than others, individuals and institutions can improve their forecasting ability through training, practice, and organization.
This is the second issue of my new newsletter, Telling the Future. I’m still not ready to launch after spending the last week doomscrolling—a week in which the leader of a nuclear superpower warned of a potential “bloodbath”—but I plan to send out issues on a regular basis soon.
We can predict the future, although not with complete certainty or accuracy. Our world is full of predictable regularities. We rely on our predictions—implicit and explicit—to make all manner of decisions, from whether to go to the park on a particular day, to whether to get married, to whether to invest in clean energy infrastructure. If we couldn’t anticipate the future at all, we would hardly be able to function.
Many future events that interest us are so close to certain that we don’t even think of them as questions. Other events are so nearly random we don’t put much effort into trying to predict them. But in between those extremes is a category of events that are difficult to foresee with certainty but nevertheless foreseeable. This is where forecasting matters.
Political, economic, and social events generally fall into the category of somewhat foreseeable things. Human society isn’t completely chaotic, but neither is it a simple mechanical system that obeys clear, fixed rules. Small changes in conditions can lead to dramatic shifts in the trajectory of events. As a result, there don’t seem to be any simple algorithms that reliably predict human behavior. But if we can anticipate these events more reliably—if we can lower the uncertainty around them—we can make better decisions.
Forecasting is a skill. In a series of tournaments—some of which I participated in—the US Intelligence Advanced Research Projects Activity (IARPA) studied geopolitical forecasting methods. These tournaments asked participants questions like the one I posed in my first newsletter (“Will Russia invade Ukraine before April 1?”). Their answers were evaluated using “Brier scores,” a tool that was originally developed to evaluate weather forecasts. Brier scores essentially measure how far forecasters’ probability estimates fall from what actually occurred; the lower the score, the better the forecast.1 What these tournaments showed was that some participants—so-called “superforecasters”—were consistently able to estimate the probability of geopolitical events both more accurately and further in advance than other groups or forecasting methods. As a group, superforecasters beat trained intelligence analysts by 25-30%.
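For readers who want the mechanics, here is a minimal sketch in Python (my own illustration, not code from the tournaments). It computes the simple binary form of the score: the average squared difference between the probabilities a forecaster gave and what actually happened, where 0 is perfect and always saying 50% scores 0.25. The tournaments used a variant of Brier’s original formulation that sums over both outcome categories, but the idea is the same.

```python
def brier_score(forecasts, outcomes):
    """Average squared difference between forecast probabilities and outcomes.

    forecasts: probabilities (0.0 to 1.0) assigned to the event occurring
    outcomes: 1 if the event occurred, 0 if it did not
    Lower is better: 0.0 is a perfect score; always saying 50% scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 65% for an event that happened and 25% for one that didn't:
print(brier_score([0.65, 0.25], [1, 0]))  # 0.0925
```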
The research showed that forecasting is an intellectual skill. One estimate found that superforecasters perform at least a standard deviation better than the general population on measures of fluid intelligence—roughly, the ability to reason about novel problems.2 They do well both on Raven’s Progressive Matrices, which test the ability to recognize and extrapolate spatial patterns, and on numeracy tests that measure the ability to reason about probability and mathematical concepts. Superforecasters also perform well on measures of crystallized intelligence—roughly, knowledge of particular facts and ideas. In particular—and unsurprisingly—the best geopolitical forecasters knew a lot about politics.3
Not every smart, knowledgeable person turns out to be a good forecaster. When Philip Tetlock originally studied expertise, he found that on average expert forecasts were more-or-less worthless. They might as well have been guessing randomly. The experts who were good at forecasting were distinguished by their cognitive style. Borrowing a framework from the philosopher Isaiah Berlin,4 Tetlock found that “foxes” who consider different points of view in their forecasts were consistently more accurate than “hedgehogs” who relate everything to a single set of preconceived ideas.5
Later research on the IARPA tournaments produced similar results. The best forecasters score high on measures of “actively open-minded thinking,” which is essentially the willingness to seriously consider alternate points of view.6 More recent research shows that the better forecasters draw on a broader range of historical comparisons and competing perspectives in their forecasting rationales. They are more likely to qualify their rationales with words like “but” or “however” than they are to reinforce them with phrases like “in addition.” In general, they seem to have a higher tolerance for “cognitive dissonance”; they’re able to consider multiple conflicting views at the same time.7
Forecasting is also a skill that can be developed and improved. Some people may have a natural facility for forecasting, but it’s also something people and institutions can learn to do better. Researchers found that forecasters who were given basic training in probabilistic reasoning, or who deliberately practiced their forecasting, improved their Brier scores.8 Accuracy also improves when forecasters work together on teams and when their individual forecasts are combined using statistical aggregation algorithms.9
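To make the aggregation point concrete, here is an illustrative sketch, again in Python and again my own simplification rather than the tournaments’ actual algorithm. One well-known idea in this literature is to average individual probabilities and then “extremize” the result, pushing it away from 50% to offset the tendency of pooled forecasts to be underconfident.

```python
def extremized_mean(probabilities, a=2.0):
    """Aggregate individual probability forecasts by averaging, then extremizing.

    Takes the plain mean, then pushes it away from 0.5 by raising the odds
    to the power a (a = 1 leaves the mean unchanged; a > 1 extremizes).
    The exponent here is purely illustrative, not a value used in the tournaments.
    """
    p = sum(probabilities) / len(probabilities)
    odds = (p / (1 - p)) ** a
    return odds / (1 + odds)

# Five forecasters who all lean the same way individually:
print(extremized_mean([0.6, 0.7, 0.65, 0.7, 0.6]))  # ~0.78, vs. a plain mean of 0.65
```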
Telling the Future is about forecasting the future more accurately. The practice of forecasting has fundamentally changed the way I think about the world and about the future. This newsletter is in part an excuse for me to continue to work on doing something I love. It’s also a way for me to share what I’ve learned with you. I hope you will find it as interesting and valuable as I do.
Ukraine forecast update: Last week I wrote there was a 65% chance Russia would invade Ukraine. On Monday, Russia did invade by moving ground forces into Ukraine, although right now Russian forces have not gone outside the territory currently controlled by separatist forces. I think I was right to note last week that there wasn’t much room for a negotiated agreement between the various parties. My initial estimate of the chance of a Russian invasion was relatively high when I made it, although arguably it should have been even higher. But I don’t think my forecast should have been too much higher, because 1) what was going to happen depended a lot on a single person’s decision; 2) Russia was deliberately obscuring its intentions; and 3) major wars in Europe are rare.
Last week I also wrote there was a 25% chance Russia would occupy territory or cities outside of Eastern Ukraine. That chance has probably gone up now. It’s possible that the goal of Putin’s fiery rhetoric is to give him cover for not acting even more aggressively, but his speech certainly seemed like a justification for a broader attack on Ukraine. It also seems unlikely to me that Russia has massed one of the largest military forces in world history just to advance into territory it already effectively controls. But I still think—with low confidence—that there’s just a 30% chance Russia will try to hold territory or cities outside of Eastern Ukraine, because the cost of a full-scale invasion of Ukraine seems likely to be higher than its benefits to Russia.
Philip Tetlock recently discussed forecasting research with Julia Galef on the Ezra Klein Show here. Clay Graubard’s excellent collection of resources for forecasting the Ukraine crisis is here. Tom Chivers wrote about superforecasters’ predictions for the Ukraine crisis here. Superforecaster Balkan Devlen’s analysis of what will happen next is here.
Glenn W. Brier, “Verification of Forecasts Expressed in Terms of Probability,” Monthly Weather Review 78, no. 1 (January 1950): 1-3.
Barbara Mellers et al., “Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions,” Perspectives on Psychological Science 10, no. 3 (2015): 267-281.
Barbara Mellers et al., “The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics,” Journal of Experimental Psychology: Applied 21, no. 1 (2015): 1-14.
Isaiah Berlin, “The Hedgehog and the Fox” (1953).
Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton University Press, 2017).
Barbara Mellers et al., “Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions,” Perspectives on Psychological Science 10, no. 3 (2015): 267-281.
Christopher W. Karvetski et al., “What do forecasting rationales reveal about thinking patterns of top geopolitical forecasters?” International Journal of Forecasting 38, no. 2 (2021): 688-704.
Welton Chang et al., “Developing expert political judgment: The impact of training and practice on judgmental accuracy in geopolitical forecasting tournaments,” Judgment and Decision Making 11, no. 2 (2016): 509-526.
Barbara Mellers et al., “Psychological Strategies for Winning a Geopolitical Forecasting Tournament,” Psychological Science 25, no. 5 (March 2014): 1-10.