
I am interested in your take on how much you think about the critique that forecasting itself can be harmful in situations of deep uncertainty.

I had previously read a lot of work on forecasting and incorporating "Knightian uncertainty." However, I never truly understood how this breaks the notion of forecasting until I started getting uncomfortable with certain public forecasts.

Essentially, you probably don't even know what you want to forecast or what will drive the forecast, so probabilities could lock in a subpar worldview or a subpar goal.

However! Good forecasters are strong at breaking a problem apart into its key drivers (at a point in time) and setting points at which they would like to "reset" their forecast or change a policy/strategy.

I mean, just read the steps of a dynamic adaptive pathway:

Step 1: Participatory problem framing, describe the system, specify objectives, and identify uncertainties

Step 2: Assess system vulnerabilities and opportunities, and identify adaptation tipping points

Step 3: Identify contingent actions and assess their ATP conditions and timing

Step 4: Design and evaluate pathways

Step 5: Design the adaptive strategy

That sounds a lot like how you build a strong forecast + some steps for actions as a result of the forecast.
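
To make that concrete, here is a rough sketch of my own (not anything taken from the DAPP literature; the names and numbers are placeholders) of how a pathway with adaptation tipping points might be represented:

```python
from dataclasses import dataclass, field

@dataclass
class TippingPoint:
    """Condition under which the current action stops meeting its objectives."""
    signpost: str     # the observable indicator we agree to monitor
    threshold: float  # the level at which we switch to a contingent action

@dataclass
class Action:
    name: str
    tipping_point: TippingPoint
    follow_on_options: list[str] = field(default_factory=list)  # contingent next steps

# A pathway is an ordered sequence of actions, each with a pre-agreed point at
# which we re-plan, rather than a single fixed forecast-driven plan.
pathway = [
    Action("near-term action", TippingPoint("driver X", 0.5), ["option B", "option C"]),
    Action("option B", TippingPoint("driver X", 0.8), ["option D"]),
]
```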

I bring this up because I just appreciate this new worldview for myself, and I feel your piece above suggests only that forecasting under deep uncertainty requires considering a wide range of possibilities, rather than focusing on the dynamic plan to deal with that wide range of possibilities.

author

It's an interesting question. I would be interested in an example of a situation where forecasting might be harmful. My intuition is that the issue isn't our forecasts themselves but what we do with them, but maybe those things aren't separable. A few thoughts off the top of my head:

- Knightian uncertainty is when the risk can't be quantified. But if forecasters can do better than random, we're not in a true state of Knightian uncertainty, because at least some of the risk can be quantified. We can't necessarily assume we're in a position of Knightian uncertainty when it's not obvious how to quantify the risk. To some extent, whether we really face Knightian uncertainty seems to me an empirical question that's hard to answer in advance. (I've sketched a toy version of what I mean by "better than random" at the end of this comment.)

- Whether or not it's truly Knightian uncertainty, I feel like essentially everything I do as a forecaster involves deep uncertainty. There are always "unknown unknowns" around any social, political, or economic question. I never know in advance what the right model to use is. The value of judgmental forecasting is precisely that we can nevertheless estimate the risk with some accuracy.

- As a rule, I think more information should improve decision-making as long as we're basically rational actors. If we make a worse choice on the basis of an accurate forecast, the problem is with us, not the forecast. Forecasts themselves only seem problematic to me if they are inaccurate or overconfident—that is, if they're bad forecasts.

- I do think planning for an uncertain future requires the flexibility to deal with a complex probability space of possible outcomes. I think we tend to anchor on and plan for a particular median scenario, even though the likelihood of that actual scenario is really small. That's one reason I think forecasts need to be put into context and should consist of more than just a point estimate of the probability that something does or doesn't happen by a particular time.

- I'm not sure what the alternative to forecasting is. Any decisions we make involve implicit forecasts. It seems to me that taking seriously the idea that forecasting is harmful would require us to act as blank slates with no priors. There are certainly situations when we should have very low confidence in our ability to forecast about the future. But I'm skeptical that formal forecasting is generally worse than a studied posture of uncertainty.
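
To make the "better than random" point a bit more concrete, here is a toy sketch (the numbers are entirely made up, not a real track record): if a set of probability forecasts scores better than an ignorant 50/50 baseline on a proper scoring rule like the Brier score, then at least some of the risk was quantifiable after all.

```python
# Toy comparison of a forecaster's Brier score against a "no information" 50/50
# baseline. All numbers are invented for illustration.
forecasts = [0.8, 0.3, 0.9, 0.2, 0.7]  # stated probabilities for five events
outcomes  = [1,   0,   1,   0,   1]    # 1 = the event happened, 0 = it didn't

def brier_score(probs, outs):
    """Mean squared error of probabilities against outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outs)) / len(probs)

print(brier_score(forecasts, outcomes))              # 0.054 for this toy record
print(brier_score([0.5] * len(outcomes), outcomes))  # 0.25, the ignorant baseline
```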


I think I am simply presenting a difference of emphasis, but I find forecasting critiques useful because they are often presented by very smart people! This is especially true for organizations and communities like the Effective Altruists, where a deep understanding of forecasts is paramount to success. I personally underestimated the importance of forecasting critiques until I found myself feeling trapped by forecasting, both at work and in EA. Why was everyone forecasting things that were so uncertain?

As a forecaster, however, you are helpful if you are *explicit* about when forecasting has stopped and what is required instead is a framework. The book attempts to define those situations as Level 4 uncertainty. It is almost tautological to say you can't know when Knightian uncertainty is dominant in a forecast. But there are certainly signs, and I will contend that when that occurs, the forecaster should rotate from providing median estimates to providing paths and factors.

Forecasting requires a deep understanding of intuition, data, and probability. Without a framework driving the forecast, you are unlikely to be calibrated. But good forecasters have a framework, so they are in an optimal position to tell when uncertainty is dominant and a 'rotation' is required -- from providing median estimates to providing paths and factors. There are certain signs that can help guide this process; for example, Level 4 uncertainty as defined in DMDU can be taken as an indication that further context is needed before any forecast is presented. By being mindful of these signs and making them explicit when they are present in a forecast, forecasters can help ensure that decisions are well informed.

From DMDU: “Stochastic uncertainties are therefore among the least of our worries; their effects are swamped by uncertainties about the state of the world and human factors for which we know absolutely nothing about probability distributions and little more about the possible outcomes."

You state, "Forecasts themselves only seem problematic to me if they are inaccurate or overconfident—that is, if they're bad forecasts." Well, we can invert that statement and say, "If a forecast will likely cause another rational forecaster to call you inaccurate or overconfident, then that forecast is problematic and carries too much uncertainty." This gives me a rough rule of thumb: forecasting as the goal isn't helpful if I would behave completely differently under reasonable perturbations in the factors or in the weighting of different outcomes, or if the forecast is very sensitive to different trajectories of those factors.
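
As a rough sketch of that rule of thumb (all the factor names, weights, and thresholds below are invented; this only shows the shape of the check): perturb the weights a little and see whether the decision the forecast is supposed to support flips.

```python
import itertools

# Toy check: does the implied decision survive modest reweighting of the factors?
# Everything here is made up for illustration.
weights = {"factor_a": 0.5, "factor_b": 0.3, "factor_c": 0.2}
signals = {"factor_a": 0.7, "factor_b": 0.1, "factor_c": 0.4}

def forecast(w, s):
    return sum(w[k] * s[k] for k in w)  # crude weighted point estimate

def decision(p, threshold=0.45):
    return "act" if p > threshold else "wait"

base = decision(forecast(weights, signals))

flips = 0
for factor, delta in itertools.product(weights, (-0.2, 0.2)):
    w = dict(weights)
    w[factor] *= 1 + delta
    total = sum(w.values())
    w = {k: v / total for k, v in w.items()}  # renormalise so weights still sum to 1
    if decision(forecast(w, signals)) != base:
        flips += 1

print(base, flips)  # "act 2" here -- the recommendation flips under small reweightings
```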

We will start with a made-up work example. Your statements about forecasting being harmless demonstrate that you must not have been frustrated enough by highly structured bureaucracies! They love to force you to forecast budgets and targets for projects that aren't really standalone projects, on time horizons that only serve to make the forecaster look ridiculous.

"How much money will this group make in 5 years? What is the budget required for the team?"

"Um, the desk just started so somewhere between 0 and some fraction of our largest competitor"

"So midpoint then?"

-> one year passes "So, we are comparing your trajectory to forecast ..."

*facepalm*

I think that you are specifically very good at presenting a forecast along with the model and context. Yet, I believe you are underestimating how important that context is relative to the actual forecast. Also, you are underestimating how popular forecasting without context is becoming.

AGI timelines can often create a sense of urgency when it comes to making decisions. This is especially true when considering the impact that these decisions could have on global health and development philanthropy, as donations may be crowded out in favour of investment in more "impactful", AI-related research. You make the statement, "If we make a worse choice on the basis of an accurate forecast, the problem is with us, not the forecast." I think you could tell me literally any number for an AGI timeline and it would be useless for my decision making. Other people, who I think are reasonable, think that tiny differences make a world of difference; this is basically the definition I set out above for deep uncertainty: rational people disagreeing a priori on the weighting schemes.

Certainly driven by some motivated reasoning, I felt that the immediate prioritization of a deeply uncertain project was just wrong. Then I read through DMDU (which I obviously can't do justice here), and it was basically a summary of the problems I had with focusing on AGI timelines. The suggestions are roughly:

Frame the problem:

What are the driving factors?

What are the triggers that show us our framework is on-track/off-track and requires a reevaluation?

What are the outcomes?

What are the policies/strategies?

Given the problem and responses, what are the trade-offs and how can we handle them *through time*?

Basically, it is building a model + pre-set updating rules, because you know that your forecast will change *a lot* with time. Given that it is changing a lot, how do your actions change as the uncertainties resolve, and what are the driving factors behind the resolving uncertainty? Answering those questions is more pertinent than the median forecast, or even the menu of possible outcomes, in my opinion.
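
As a minimal sketch of what I mean by pre-set updating rules (the signposts, thresholds, and responses are placeholders I made up, not anything from the book):

```python
# Pre-registered monitoring rules: each signpost has a trigger condition and a
# pre-agreed response, decided before the uncertainty resolves.
updating_rules = [
    {"signpost": "driver X level", "trigger": lambda x: x > 0.7,
     "response": "re-run the problem framing; the current strategy may be off-track"},
    {"signpost": "benchmark Y result", "trigger": lambda x: x < 0.2,
     "response": "stay the course; revisit at the next scheduled review"},
]

def check_signposts(observations):
    """observations maps signpost names to their latest observed values."""
    for rule in updating_rules:
        value = observations.get(rule["signpost"])
        if value is not None and rule["trigger"](value):
            yield rule["signpost"], rule["response"]

for signpost, response in check_signposts({"driver X level": 0.9}):
    print(f"{signpost}: {response}")
```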

It is this focus on the timeline of expected conditional forecasts that I really appreciate.

"How much money will this group make in 5 years? What is the budget required for the team?"

"If we are able to hire 3 people by May then I expect to have an estimate for next year's budget and revenue by August. If we find a promising opportunity in analysis XYZ which should be done at end of March then it would take about 12 months to begin monetizing ... , if those we can't hire those people then ..."

"When will AGI be built?"

"Capability A is likely within 10 to 30 years to AGI, if it is built then we should expect problem X to arise or problem Y to arise or problem Z. If all three are absent then we watch for warning Omega but that is much less likely. Capability B is with 5 years ..."

I know there are groups that do present dynamic adaptive policy pathways, but it seems to be less common. Most are just trying to get the odds of AGI on a given timeline and multiply by the probability of extinction. Instead, I feel that if we can't even agree on early flags, then we certainly don't have enough understanding to know how actions will change outcomes, but now I'm off track.

Again, I think this is just a different emphasis. When I read your current post, I interpreted the takeaway as suggesting that uncertainty just meant you needed to be more exploratory about the total range of outcomes. I think actually refocusing on triggers/drivers/trajectories changes the presentation of the solution. As you mention, "I think we tend to anchor on and plan for a particular median scenario, even though the likelihood of that actual scenario is really small." This does seem to cause problems when there are large tradeoffs at stake.

author

Thanks. I completely agree that it makes sense to focus much more on those pivotal things that are likely to determine what future we end up with—although identifying those pivotal things is a real challenge in itself. I don't have time to address many of the issues you raise now, but let's keep this discussion going in future threads.


I noticed you didn't mention any types of strategies for dealing with forecasting under deep uncertainty. Have you read "Decision Making under Deep Uncertainty"? It seems related to the post, and a de Neufville contributes a chapter.

https://library.oapen.org/handle/20.500.12657/22900

author

Thank you for your kind message earlier! That's actually my father's paper. I'm familiar with his engineering options work, although that kind of project planning is beyond my scope. But I think in conditions of "deep uncertainty"—when we're unsure about the boundaries of a problem, what models are appropriate, etc.—that it's important to consider issues from a wide range of different perspectives and to treat our ideas as provisional hypotheses rather than settled questions. I think it's precisely in conditions of deep uncertainty that judgmental forecasting has a clear advantage over algorithmic approaches.
