
Embracing Multiple Points of View in Financial Forecasting

  • By Brooke Ballenger
  • Published: 3/30/2021

People often trust precise forecasts more than vague forecasts because precision is associated with knowledge and expertise. But what if data science and analytics trap organizations in a deductive approach to planning and managing the business? Deductive models begin with assumptions and inputs, run calculations, and predict an output. Perhaps there is a different approach to consider.

In a recent AFP FinNext Virtual session, “Scientific and Precise but Wrong: A Probabilistic Approach to Forecasting,” Danny Bharat, vice president of Planning Analytics at American Tire Distributors, and Dr. Andrew Brooks, senior director of Data Science at Torqata, discussed how to simplify models to provide actionable insights and plan for uncertain futures.

Forecasting as a talent

In the 1950s, computer-based modeling became a major factor in improving forecasting, bringing more tools and a greater ability to handle data. As the market itself continued to grow, forecasting activities continued to evolve. However, according to Bharat and Brooks, forecasting is not a mechanical black box: It is a talent. “Some domains work really well with big data and the predictive elements, but for most companies and users, forecasting is still a bit of an art,” said Bharat. “Finance teams drive a lot of that, and it is a skill that can be acquired.”

Research demonstrates that individuals can become empirically better at forecasting. Some attributes of “super forecasters” include having a growth mindset, statistical fluency, a cautious and nuanced approach, healthy skepticism, and the ability to synthesize and think in gradations.

Uncertainty and precision

According to results from a 20-year longitudinal study on forecasting, overall accuracy has been decreasing. Some of the possible explanations for this include more complex supply chains, product proliferation, and a decline in familiarity with forecasting techniques due to a lack of training. “Overall forecast accuracy is also decreasing because now we are looking out on longer-term horizons,” said Brooks. “Supply chains are much more complex, and we have a lot more products in different places. We also know that as you get more granular, that noise can more easily influence your forecast and make it difficult.”

Most people expect highly confident experts to give more precise predictions, but forecasts of future outcomes can be given with different degrees of precision. For example, Bharat and Brooks explain how one climate scientist may describe a forecast as a “temperature increase of 3-4 degrees,” while another may describe it as a “temperature increase of 2-6 degrees.” A person would be tempted to trust the 3-4 degree forecast because the narrower range appears more precise. However, Bharat notes that, statistically, there is a negative relationship between precision and probability. “The more precise you are, or the narrower range you provide, the lower the probability that you will hit that or achieve that, or accomplish that, depending on what you're trying to forecast,” said Bharat.
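To make that trade-off concrete, here is a minimal Python sketch using hypothetical numbers (the distribution and its parameters are assumptions for illustration, not figures from the session): under any fixed belief about the true outcome, the wider 2-6 degree range necessarily covers more probability than the narrower 3-4 degree range.

```python
# Hypothetical illustration of the precision/probability trade-off.
# Assume (purely for illustration) the true temperature increase is
# normally distributed with mean 4 degrees and standard deviation 1.5.
from scipy.stats import norm

outcome = norm(loc=4.0, scale=1.5)

# Probability that the actual outcome lands inside each forecast range
narrow = outcome.cdf(4) - outcome.cdf(3)   # "3-4 degrees"
wide = outcome.cdf(6) - outcome.cdf(2)     # "2-6 degrees"

print(f"P(outcome in 3-4 degrees): {narrow:.2f}")  # about 0.25
print(f"P(outcome in 2-6 degrees): {wide:.2f}")    # about 0.82
```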

Several studies also show that participants ignore confidence levels when forecasting due to deficient understanding, conversational norms, domain and culture, and human elements.

Furthermore, Bharat and Brooks explain the importance of avoiding Simpson’s paradox, which they describe as “a phenomenon in probability and statistics in which a trend appears in several different groups of data but disappears or reverses when these groups are combined.” Ways to avoid this paradox include: 1) improved diagnostics, such as additional queries, generating two-way cross frequencies, and correlation analyses, and 2) using domain and industry knowledge, such as leveraging historical knowledge and considering industry-wide trends and characteristics.
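A compact, invented example helps show how the paradox can appear in forecast reporting: accuracy improves within each of two product segments from one year to the next, yet the combined accuracy falls because the mix of forecasts shifts toward the harder-to-forecast segment. All counts below are hypothetical.

```python
# Hypothetical forecast-accuracy data: (accurate forecasts, total forecasts)
# for two product segments in two years. Numbers are invented to
# illustrate Simpson's paradox, not drawn from the session.
data = {
    "Year 1": {"Segment A": (900, 1000), "Segment B": (60, 100)},
    "Year 2": {"Segment A": (92, 100),   "Segment B": (620, 1000)},
}

for year, segments in data.items():
    total_hits = total_n = 0
    for name, (hits, n) in segments.items():
        total_hits += hits
        total_n += n
        print(f"{year} {name}: {hits / n:.0%} accurate")
    print(f"{year} combined: {total_hits / total_n:.0%} accurate\n")

# Each segment improves from Year 1 to Year 2 (90% -> 92%, 60% -> 62%),
# yet combined accuracy falls (87% -> 65%) because the forecast mix
# shifts toward the harder-to-forecast Segment B.
```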

Despite its challenges, measuring uncertainty allows an individual to balance risk versus reward. “Measuring uncertainty is incredibly important because we know no predictive model is perfect all the time,” said Brooks. “Point estimates need to include probability or confidence intervals so that you can make the best decision.”

Leveraging inductive thinking

Bharat and Brooks emphasize that the key to successfully navigating an uncertain future is inductive thinking, which builds rules and hypotheses from observed data. Because observed data is dispersed, inductive models rely on probability-based approaches, such as Bayesian measurement, Monte Carlo simulations and scenario planning.
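As a minimal sketch of one such probability-based approach, the Monte Carlo simulation below draws thousands of revenue scenarios from assumed demand and price distributions (all parameters are illustrative assumptions, not figures from the session) and reports a range of outcomes with probabilities rather than a single point estimate.

```python
# Minimal Monte Carlo sketch: simulate next quarter's revenue under
# uncertain demand and price. Distributions and parameters are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 100_000

units = rng.normal(loc=10_000, scale=1_500, size=n_scenarios)          # demand uncertainty
price = rng.triangular(left=45, mode=50, right=60, size=n_scenarios)   # price uncertainty
revenue = units * price

# Report a range (with probabilities) instead of a single point forecast
p10, p50, p90 = np.percentile(revenue, [10, 50, 90])
print(f"P10: {p10:,.0f}  P50: {p50:,.0f}  P90: {p90:,.0f}")
print(f"P(revenue < 450,000): {(revenue < 450_000).mean():.1%}")
```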

Deductive and inductive thinking are both useful, and in fact mutually reinforcing, when forecasting. Deductive thinking is used to solve problems through specific, well-supported solutions, while inductive thinking means drawing a more generalized conclusion from experience, which may become the new deductive model. “It is making sense of that inductive thinking to come up with something new, something that is out of the box,” said Bharat. “You need to identify opportunities with a combination of inductive and deductive thinking, and I would encourage people to research and build up some skills in creative thinking.”

Bharat and Brooks conclude that at the end of the day, forecasting is nothing more (nor less) than the systematic and disciplined application of common sense. In addition, they share six rules for effective forecasting1:

  1. Define a cone of uncertainty.
  2. Look for the S curve.
  3. Embrace the things that do not fit.
  4. Hold strong opinions weakly.
  5. Look back twice as far as forward.
  6. Know when not to make a forecast.

Learn more about FP&A’s multiple points of view, and download the 2020 AFP FP&A Survey: The Technology and Data Platform Supporting Finance Decisions, underwritten by Workday.


1: Saffo, P. “Six Rules for Effective Forecasting.” Harvard Business Review, July-August 2007. Retrieved April 2021, from https://hbr.org/2007/07/six-rules-for-effective-forecasting

