A Nobel Prize-Winning Economist’s Four Steps to Minimize Forecast Bias
A problem with Demand Collaboration is that we humans naturally tend to overestimate our own forecasting skills and underestimate or even distrust computer-generated statistical forecasts. We ignore the statistical probabilities of outcomes and assign great significance to the few facts we know, ignoring the impact of what we don’t know.
For instance, if we believe a new product has “the right stuff”, we might give it a 90% chance of a successful launch, minimizing or even ignoring the fact that in the past only 20% of new product launches were successful. As good forecasters, though, we should not simply override the 20% prediction with a 90% prediction, but instead adjust the statistical forecast, say up from 20% to 50%.
Nobel Prize winner Daniel Kahneman’s book Thinking, Fast and Slow offers a simple four-step process to do just that, minimizing this forecast bias. The process consists of identifying baseline demand, making an intuitive prediction based on the available evidence (such as a promotion or new customer), and then adjusting the baseline forecast based on your estimate of how much of the total demand is driven by the factors that went into creating the intuitive forecast. (While Kahneman talks about forecasting in general, this blog will describe his approach specifically for demand forecasting.) Written as steps, the process looks like this:
- Determine the baseline forecast using statistical forecasting techniques
- Create an intuitive forecast using the available evidence
- Estimate the percentage of the demand factors considered by your intuitive forecast
- Adjust the baseline demand forecast towards the intuitive forecast by that percentage. So if your intuitive forecast considers 40% of all the factors that drive demand, then you adjust the baseline forecast 40% of the way towards the intuitive forecast.
Humans naturally tend to overestimate their own judgment, and so will often reject the computer-based baseline forecast, simply overriding it with their own estimate (Step 2 above). In contrast, Kahneman’s approach finds a forecast that balances the baseline and the manual forecasts. Let’s look at each step in more detail.
Step 1 – Baseline Forecast
The baseline forecast should be the best possible forecast that can be determined from statistical forecasting techniques. Of course not all forecasting techniques are equal. Using a strong approach that yields consistently good results will be the most helpful. For our baseline forecasts, we use a demand modeling approach based on a single proprietary stochastic model.
Beware of overemphasizing recent results using something like a “best fit” technique that switches to the technique that most closely predicted the most recent demand. This “overfitting” approach almost never works because it is essentially “chasing” the forecast. It’s akin to switching your investment portfolio every month to the stock that performed best last month. There will almost always be one outlier that does exceptionally well one month, only to regress to the mean in the following month. By chasing outliers, the system keeps picking the technique that happened to perform exceptionally well in the past, without any assurance that the performance will repeat.
Step 2 – Intuitive Forecast
This is the equivalent of a manual override. It is a forecast based on your evaluation of the evidence at hand about significant causal factors not already incorporated into the baseline forecast. It is usually based on the intuitive estimate of a planner, salesperson or manager and is based on the impact of a demand driver such as a promotion, new product introduction or new customer. It might look like “this type of promotion usually generates demand for 5000 cases of product” or “a product this good usually generates orders for 12,000 units in the first month.”
Step 3 – Correlation Estimate
The trick here is to determine to what degree the causal factors used to create the intuitive forecast capture all the factors that drive demand, expressed as a percentage. For example, a manual forecast may be based primarily on the quality of the promotion itself, which may really only account for one third of the story, and ignore other important factors such as the type of product, the latest weather patterns or competitors’ activities. Together you might estimate that all these other factors account for the other two thirds of the demand. Since your intuitive forecast accounts for only one third of the demand, you would estimate the correlation at 33%.
Step 4 – Adjusted Forecast
The adjusted forecast starts with the baseline forecast, then moves it toward the intuitive forecast by the correlation percentage. So if the baseline forecast is 200 (Step 1), the intuitive forecast is 300 (Step 2), and the correlation is 33% (Step 3), then the adjusted forecast is 200 + 0.33 × (300 − 200) = 233. The reason for starting from the baseline forecast is not that the baseline necessarily accounts for these factors better than the human forecast; rather, it mitigates the impact of the human forecast, reducing the overemphasis on the few factors inherent in the human estimate. The forecast builds on the planner’s or manager’s intuition, but “regresses it to the mean”.
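The adjustment above is a simple linear interpolation between the two forecasts. As a minimal sketch (the function name and structure are my own illustration, not from Kahneman’s book):

```python
def adjusted_forecast(baseline, intuitive, correlation):
    """Move the baseline forecast toward the intuitive forecast by the
    estimated correlation, a fraction between 0 and 1 (Step 4)."""
    return baseline + correlation * (intuitive - baseline)

# Worked example from the text: baseline 200, intuitive 300, correlation 33%
print(adjusted_forecast(200, 300, 0.33))  # 233.0
```

Note the two extremes: a correlation of 0 leaves the baseline untouched, while a correlation of 1 amounts to a full manual override.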
The benefit of this approach is that it reduces the unrealistic “certainty” of a manual override. It finds a middle point between a strict statistical approach and human judgment. While not perfect, it tends to improve the baseline forecast and substantially outperforms a human forecast. Or as Kahneman explains, it reduces forecast bias and increases forecast accuracy because “you still make errors when your predictions are unbiased, but the errors are smaller and do not favor either high or low outcomes.”
Another benefit is that merging the algorithm-based forecast with human judgment helps people involved in Demand Planning trust the system. Studies have shown that forecasters often will not trust the computer, even when historical evidence shows that the computer outperforms their own forecast. This is known as “algorithm aversion”. But when a combination of computer forecast and human judgment is allowed, forecasters are up to three times more likely to trust the computer. This reduces the chances of unwarranted manual overrides that ignore the computer-generated baseline forecast.
When trying to make predictions, humans tend to put far too much emphasis on the few facts available to them, while ignoring or minimizing other factors that could improve forecast accuracy. Kahneman’s four-step process addresses this problem.