January 14, 2025
What Forecasts Can't Tell You
A good forecast is not a prediction. It's a structured way of being honest about what you don't know.
We build forecasting software. So we think about this a lot: what is a forecast actually for?
The naive answer is that a forecast tells you what will happen. But that’s not right, and believing it leads to bad decisions. A forecast is a probability distribution over possible futures. It tells you what’s likely, based on the patterns in your data, under the assumption that those patterns will continue to hold.
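To make "a probability distribution over possible futures" concrete, here is a minimal sketch with made-up numbers. It assumes a simple random-walk model: next month's value is last month's value plus a step drawn from the historical step distribution. The point is the shape of the output, a range of quantiles rather than a single number.

```python
import numpy as np

# Hypothetical monthly demand history (illustrative numbers only).
rng = np.random.default_rng(0)
history = np.array([100, 104, 101, 108, 110, 107, 113, 118], dtype=float)

# Random-walk assumption: simulate many possible next values by
# resampling observed month-over-month steps.
steps = np.diff(history)
simulated = history[-1] + rng.choice(steps, size=10_000, replace=True)

# The forecast is the whole distribution; report it as quantiles.
p10, p50, p90 = np.percentile(simulated, [10, 50, 90])
print(f"next month: p10={p10:.0f}  p50={p50:.0f}  p90={p90:.0f}")
```

Nothing here is sophisticated, and that is the point: even the crudest model can be honest about its spread instead of emitting one confident number.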
That last clause is where most forecasts fail — not in the math, but in the assumption.
The base rate problem
Every forecasting model is, at its core, an argument from analogy. “Things have behaved this way in the past, so they will probably behave this way in the future.” This works extraordinarily well when the analogy holds. It fails catastrophically when the underlying process changes — when you enter a new market, when a competitor does something unexpected, when a pandemic reshapes consumer behavior overnight.
The model doesn’t know this has happened. It continues to reason from the old patterns. The numbers look confident. They are confidently wrong.
This is what we call the base rate problem: your historical data represents a world that may no longer exist. The older the data, the more likely it describes conditions that no longer hold. A five-year demand forecast trained on pre-2020 retail data encodes a model of consumer behavior that may simply have ceased to exist.
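The base rate problem is easy to demonstrate with a toy example (hypothetical numbers throughout). A model fit on pre-shift data produces a tight, confident interval; after a structural change, reality simply walks out of it, and nothing in the model can notice.

```python
import numpy as np

# The world the training data describes: stable demand around 100.
rng = np.random.default_rng(1)
pre_shift = rng.normal(loc=100, scale=5, size=500)

# The world after a structural change (e.g. a new market entrant).
post_shift_actual = 140

# A naive model: assume the old distribution still holds and
# report a ~95% interval (mean plus or minus two standard deviations).
mu, sigma = pre_shift.mean(), pre_shift.std()
lo, hi = mu - 2 * sigma, mu + 2 * sigma

print(f"model interval: [{lo:.1f}, {hi:.1f}]  actual: {post_shift_actual}")
print("inside interval?", lo <= post_shift_actual <= hi)  # confidently wrong
```

The interval is narrow because the old data really was that consistent. The confidence is real; it just belongs to a world that no longer exists.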
What to do about it
The answer is not to distrust forecasts. It’s to use them correctly — as inputs to judgment, not replacements for it.
A good forecast tells you: if nothing structural changes, here is the range of outcomes you should expect, and here is how confident you should be in that range. It quantifies your uncertainty rather than hiding it. That’s genuinely useful. It lets you make decisions with your eyes open.
But a good forecast can’t tell you whether something structural will change. That requires domain knowledge, market intuition, and the kind of qualitative reasoning that doesn’t compress into a training set.
The companies we’ve seen use forecasting well treat the model as a first opinion — the baseline to argue against. Their analysts don’t ask “what does the model say?” and then act on it. They ask “what does the model say, and where do we think it’s wrong, and why?”
That’s a harder process. It requires people who understand both the model and the business. But it’s the only version that actually works, because it combines what machines do well (processing large amounts of historical data without fatigue) with what people do well (knowing when the past doesn’t apply).
A good forecast is not a prediction. It’s a structured way of being honest about what you don’t know.