Telling the Future is going paid starting today. Everything I have written will remain freely available to everyone. But producing this newsletter takes time and my ability to continue doing it beyond the end of this year depends on readers like you. If you enjoy Telling the Future, I hope you will support my work by buying a paid subscription.
We think too highly of our own opinions. Of course, we think our opinions are correct; that’s just what having an opinion means. If we didn’t think our opinion was right, we’d hold a different opinion. But if we want to make accurately calibrated forecasts—so that what we’re predicting happens about as often as we say it will—we need to subject our own ideas to the same skepticism with which we’d view the ideas of others.
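To make the idea of calibration concrete, here is a minimal sketch (the handful of forecasts is made up purely for illustration) of one way to check it: group past probability judgments by the number assigned, then compare that number with how often the events in each group actually happened.

```python
from collections import defaultdict

# Illustrative only: each entry is (stated probability, whether the event happened).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, True),
    (0.2, False), (0.2, False), (0.2, True),
]

# Group outcomes by the probability that was assigned.
buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

# For a well-calibrated forecaster, the stated probability and the observed
# frequency roughly line up, at least once there are enough forecasts.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {prob:.0%} -> happened {observed:.0%} ({len(outcomes)} forecasts)")
```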
We should be particularly skeptical of our own ideas when we’re forecasting human events. We may be very smart, but predicting the behavior of a complex system—in which outcomes may depend on slight variations in a large number of variables—is not something even the smartest person can work out analytically. In human affairs, we often don’t have—and in practice can’t have—enough information to reliably project the current state of the world forward in time. In those cases, what’s likely to happen is often primarily an empirical question, which we can confidently answer only by seeing what has happened in comparable situations. History from this perspective is the record of past experiments in the behavior of human societies; it’s our best evidence for how future experiments are likely to turn out.
Where history isn’t likely to be a good guide to future events, forecasters can only speculate about what’s likely to happen next. This was the case for many of the questions asked in last year’s Existential Risk Persuasion Tournament (XPT). Forecasters couldn’t look back to the previous times humans have produced machines with intelligence comparable to our own, for example, because human-level machine intelligence is unprecedented. But we also need to recognize how limited our ability is to guess at possible futures this way. Extraordinary events like the development of AI are novel experiments with a wide range of plausible outcomes. I’m personally skeptical about parts of the AI catastrophe story that people often tell—that there won’t be diminishing returns to increasing intelligence, that AI will be strongly motivated to act independently of humans, that AI will have to be perfectly aligned with human values to avoid conflict, and so on—but anyone who says the outcome of AI development is certain lacks imagination.
That doesn’t mean that all forecasts about novel situations are equally good. As Carl Sagan famously said—and as the more skeptical forecasters in the XPT pointed out—extraordinary claims require extraordinary evidence. When people predict radical and unprecedented events, they’re usually wrong. We need to take the outside view and recognize that our extraordinary predictions are likely wrong too. Overconfidence in our ability to project future events leads to confident predictions that humans—in spite of the fact that the global population is greater now than at any other time in history—are actually on the verge of extinction. We certainly shouldn’t dismiss such concerns, but speculative reasoning isn’t enough evidence to make a prediction like that with confidence. We may believe we have compelling reasons for thinking this time is different, but so does everyone else who confidently makes extraordinary claims.
We need, in other words, to account for our own fallibility. Even in extraordinary circumstances, our default expectation—in the absence of strong, clear evidence to the contrary—should be that the future will roughly resemble the past. Disregarding the past puts too much weight on our limited ability to project the future on the basis of our inevitably flawed reasoning. Even when some kind of extraordinary outcome is likely—as probably is the case with the development of AI—it may be extremely hard to say with confidence what specifically is likely to happen. If we want to predict the future, we have to be intelligent about the limits of our intelligence. Good forecasters are the ones who know how little they know.
My thanks to AI Supremacy for including Telling the Future on its list of newsletters to read on AI issues. Superforecaster Kjirste Morrell wrote about her experience participating in the XPT, and another superforecaster explained in his Substack newsletter why he thinks catastrophic risk from AI is overestimated. If you find Telling the Future valuable, you can support it by buying a paid subscription for a few dollars a month. My heartfelt thanks to those readers who have already generously pledged their support. If you can’t afford to support Telling the Future with a paid subscription, you can always support my work by sharing it with others!
Super! Congrats on going paid & I'm honored that you mentioned my post.
You know, in addition to the risk of our own fallibility, I was thinking about the way we might get over-identified with our forecasts. I like to be able to adapt to conditions as they change, but I imagine that might be more difficult if I'm categorized in one way or another. Maybe some of the time when people project certainty, they're defending their past views.
Thank you for the mention, Robert, and congratulations on going paid!