While more companies are trying to perfect the science of forecasting with techniques like driver-based modeling, one thing continues to stand in the way of making purely data-driven decisions: human nature. You’ve all faced this situation. The numbers come back nice and tidy, but managers refuse to take them at face value and keep massaging the forecast to align it with the business decisions they’re trying to make. They sandbag. They lowball. Or, conversely, they present an overly rosy picture because they’re asking for more money for a pet project.
All this makes it hard for finance executives to decide which numbers to put in the “official” forecast that management gets to see, and ultimately in the budget. Just how much time have you spent negotiating these numbers back and forth? A typical budget cycle includes at least three iterations. Even at companies with a driver-based forecasting model in place that should, in theory, take the politics out of the process, there’s still quite a bit of haggling.
“Companies make a lot of small judgmental adjustments [even] to model-based forecasts that waste time and incrementally hurt the forecast,” said Paul Goodwin, Professor Emeritus of Management Science at the University of Bath in England, who’s written extensively on the use of judgment in business forecasting.
Seven ways to insulate the forecasting process
That does not mean finance executives should give up, or in. There are at least seven things you can do to protect the integrity of the forecast.
- Appoint an independent partner. Sales forecasts reflect sales expectations. Marketing forecasts reflect marketing expectations. To get at an objective and accurate forecast, you should appoint an independent group to aggregate and run the forecasting numbers. That’s the chief role of the FP&A group.
- Separate forecast from decision. According to Goodwin, one way finance can ensure the forecast remains unbiased is to separate it from the act of decision-making. “The forecast should be a genuine expectation of what will happen based on the information you have at the time,” he said. The consequences belong to the decision. If forecasters consider those consequences, whether personal (I may get fired if my forecast is wrong) or corporate (I will have to deliver on that forecast), that consideration will taint the numbers no matter how sophisticated the models.
- Concentrate on history. You’ve probably experienced a range of reasons managers come up with to explain away a recent random change in numbers. “There’s always this expectation that next week will be different,” Goodwin said. So look at historical data when possible to dispel notions of a “brand new world” just because there’s a twitch in the grass.
- Design a careful feedback loop. Simply telling managers their forecast was less accurate than the model’s is not enough. Instead, give them actionable information. For example, it’s more useful to let managers know that they’ve been consistently over-forecasting by 10 percent or under-forecasting by 5 percent. That’s something they can act on; it’s feedback on bias rather than on accuracy alone. In addition, where possible, provide feedback on which particular elements are over-weighted in the forecast.
- Clearly define the forecast. Another way to combat biases in human judgment is to clearly define what the forecast is—and isn’t. You should tell managers that the forecast is strictly the best expectation of what will happen.
- Combine multiple views. You can also take a different route and collect anonymous forecasts from multiple people. In face-to-face meetings, people are often shy or reluctant to contradict the higher-ups, so the one with the loudest voice wins.
- Design the right support system. Finally, make sure you allow for the human element in using your forecasting tool. Most solutions rely on ever-more-precise algorithms. That approach may produce more accurate numbers, but it doesn’t guarantee those numbers will make it into the decision-making process. Let business partners have input into the process as well.
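The bias-feedback idea above can be made concrete with a few lines of code. The sketch below is purely illustrative; the forecast and actual figures are hypothetical, as are the function names. The signed mean percentage error exposes bias (a manager who consistently over- or under-forecasts), while the unsigned mean absolute percentage error measures accuracy without regard to direction:

```python
def mean_percentage_error(forecasts, actuals):
    """Signed error: positive means over-forecasting, negative under-forecasting."""
    errors = [(f - a) / a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def mean_absolute_percentage_error(forecasts, actuals):
    """Unsigned error: overall accuracy regardless of direction."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

forecasts = [110, 108, 112, 109]  # hypothetical manager forecasts
actuals   = [100, 101,  99, 100]  # hypothetical actual results

bias = mean_percentage_error(forecasts, actuals)
accuracy = mean_absolute_percentage_error(forecasts, actuals)
print(f"Bias: {bias:+.1%}")   # a persistent positive bias signals over-forecasting
print(f"MAPE: {accuracy:.1%}")
```

Telling a manager “your bias is roughly +10 percent” gives them a specific correction to make; a raw accuracy score does not.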
As more finance groups invest in new technologies while facing demands from management to deliver actionable information faster and more frequently, your challenge is to protect the integrity of the forecasting process. Tools that allow for more scientific forecasting, using algorithms, scenario analysis, and Monte Carlo simulations, are becoming more prevalent, but you cannot ignore what behavioral science has found about how we all make decisions and view the future. The challenge is to identify bias and isolate it, to engender a more data-driven culture.
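For readers unfamiliar with the Monte Carlo simulations mentioned above, here is a minimal sketch of the idea. Every figure and distribution parameter is a made-up assumption for illustration: revenue and cost are each sampled from a normal distribution, and the simulation reports an expected profit along with the probability of missing a profit target, rather than a single point forecast.

```python
import random

# Hypothetical assumptions (not from any real model): revenue ~ N(1000, 80),
# cost ~ N(850, 50), profit target = 120.
random.seed(42)  # make runs reproducible

def simulate_profits(trials=100_000):
    """Monte Carlo: sample revenue and cost independently, return profit outcomes."""
    return [random.gauss(1000, 80) - random.gauss(850, 50) for _ in range(trials)]

profits = simulate_profits()
target = 120
expected = sum(profits) / len(profits)
p_miss = sum(p < target for p in profits) / len(profits)

print(f"Expected profit: {expected:.0f}")
print(f"Probability of missing the target of {target}: {p_miss:.1%}")
```

The value of this approach for the bias problem is that it replaces a single negotiable number with a distribution of outcomes, which leaves managers less room to sandbag or lowball one headline figure.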