In 2011, on the heels of some major forecasting failures, the U.S. government sponsored a competition to improve the “wisdom of the crowd.”
Professor Philip Tetlock of the University of Pennsylvania led a team of Good Judgment forecasters in that competition, where they handily beat skilled defense intelligence analysts with access to classified data, university researchers, and various other teams. Oddly, Tetlock's forecasters were not savants; they had no deep technical expertise or experience in the fields they were forecasting. What they did have were traits that helped them eliminate biases and produce better results than most people in the crowd.
Qualities that matter
If you want to become a superforecaster, there are certain qualities that matter more than others. The first is pattern recognition, which helps us to detect problems quickly and without too much thought.
The next quality is actively open-minded thinking. The idea behind this is that we’re always testing our beliefs about the world — not protecting them.
The third quality is being cognitively reflective. This is basically the idea that when you're presented with a complex problem, you don't go for the first answer that pops into your head. Rather, you slow yourself down and ask whether you've got it all, and whether the first answer — the most obvious one — is really the right one.
Eliminate bias that colors our forecasts
Anchoring bias causes us to focus on a certain initial value and base our decisions on that value. The problem is, we use anchors even when we don’t need to. An excellent example of the anchoring effect is an exercise designed by Daniel Kahneman, an Israeli psychologist and economist known for his work on the psychology of judgment and decision-making, and behavioral economics.
In the exercise, one group is asked whether the Eiffel Tower is more or less than 10,000 feet tall; a second group is asked whether it is more or less than 100 feet tall. Both groups are then asked the same follow-up question: What is the height of the Eiffel Tower? With those initial numbers planted in participants' heads, the first group responds with a number in the thousands, while the second group responds with a number in the hundreds. The way the information was presented limited their range of thought.
“Without realizing it, we can become anchored on numbers, things we hear, the way questions get phrased. Even a number that's written on a little piece of paper can get you anchored,” said Warren Hatch, CEO of Good Judgment, in his webinar presentation, “5 Key Steps to Superforecasting.”
Five key steps to superforecasting
With the top qualities of a superforecaster in mind and a goal of eliminating bias, you and your team can improve on your forecasting with five key steps from Hatch.
1. Start with a base rate. You want to anchor in the best possible place, so you start by finding good comparison classes, then tackle the specifics of the question. For example, let’s say you attend a royal wedding, and while everyone else is charmed by the new couple, you’re wondering what the chances are that their marriage will last. To determine this, you would first think to look at the divorce rate of royal couples, but that’s not enough. They are also now celebrities, so you could add in the divorce rate among celebrity couples, and on and on. Science says the base rate is the right way to improve the accuracy of your forecast, but the art is to select the right one.
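Blending several reference classes like this is, at its simplest, a weighted average of their base rates. The sketch below uses made-up rates and weights purely to illustrate the mechanics; choosing the right classes and weights is the "art" the article describes.

```python
# Sketch: blending base rates from multiple reference classes.
# All rates and weights below are illustrative, not real statistics.

def blended_base_rate(rates_and_weights):
    """Weighted average of (base_rate, weight) pairs from several reference classes."""
    total_weight = sum(w for _, w in rates_and_weights)
    return sum(r * w for r, w in rates_and_weights) / total_weight

# Hypothetical reference classes for the royal-wedding example:
estimate = blended_base_rate([
    (0.20, 2.0),  # made-up divorce rate among royal couples, weighted higher
    (0.50, 1.0),  # made-up divorce rate among celebrity couples
])
print(round(estimate, 2))  # 0.3
```

The anchor you start from is this blended rate; the specifics of the couple then move you up or down from it.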
2. Record your forecasts. This builds several positive habits: when situations resolve, you can go back and check your thinking; it provides the context for the number you're providing; and it allows you to share your insights with others. Good forecasters, when they explain the context, use words like "however" and "on the other hand." Bad forecasters use words like "moreover" that stress the narrative and drive a thesis instead of considering contradictory information. That is the kind of language Wall Street strategists use, and you want to avoid it.
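A forecast log doesn't need to be elaborate; a minimal sketch follows, with a hypothetical question and rationale. The rationale field is where the "however" language belongs, so contradictory considerations get written down alongside the number.

```python
from datetime import date

# Minimal sketch of a forecast log: record the number, the date, and the
# reasoning, including considerations that cut against your view.
forecast_log = []

def record_forecast(question, probability, rationale):
    entry = {
        "question": question,
        "p": probability,
        "date": date.today().isoformat(),
        "rationale": rationale,
        "resolved": None,  # filled in later when the question resolves
    }
    forecast_log.append(entry)
    return entry

record_forecast(
    "Will project X ship by Q3?",  # hypothetical question
    0.65,
    "Base rate for on-time delivery here is ~70%; however, staffing is tight.",
)
print(len(forecast_log))  # 1
```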
3. Compare. Share your reasoning with other forecasters, and take advantage of their diverse views. This requires structuring the team with different points of view and approaches, so that when you come together you don't simply default to whatever everyone already agrees on and fall victim to groupthink. Sharing and comparing in this way brings other data points into the evaluation.
An easy way to minimize groupthink is a time-honored process pioneered by the RAND Corporation called the Delphi method. Rather than anchoring on a high-status individual or sliding into groupthink, everyone offers an opinion independently and anonymously. The opinions are then circulated, and everyone updates via another round of anonymous submissions. The method minimizes various group risks, saves time in meetings (less discussing, more submitting), and leads to better outcomes.
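The Delphi rounds above can be sketched in a few lines. Here the group median serves as the anonymous feedback between rounds; that is one reasonable aggregation choice (it resists being dragged by a single extreme estimate), not the only one, and the estimates themselves are made up.

```python
from statistics import median

def delphi_round(estimates):
    """Aggregate one round of anonymous probability estimates via the median."""
    return median(estimates)

round1 = [0.10, 0.25, 0.30, 0.60, 0.35]  # independent, anonymous first-round views
feedback = delphi_round(round1)          # shared back to the group anonymously
round2 = [0.20, 0.28, 0.30, 0.45, 0.33]  # revised estimates after seeing feedback
print(delphi_round(round2))  # 0.3
```

Note that nobody's name is attached to any number at any point, which is what protects the process from status-based anchoring.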
4. Update. As new information comes in, your views should evolve and you should be willing to change your forecast. “The payoff is significant,” said Hatch. “We did an experiment on Good Judgment about a year ago with a client. They had charter holders making forecasts on questions, and we observed the impact of updating. When they made even one update, the accuracy went up 148%. When they made two updates, 342%, three updates, even more.”
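The article doesn't prescribe a mechanism for updating, but one common approach among forecasters is a Bayesian update in odds form. This sketch, with made-up numbers, shows how a 30% forecast shifts when you judge new evidence to be twice as likely if the event happens as if it doesn't.

```python
def bayes_update(prior, likelihood_ratio):
    """Update a probability forecast given a likelihood ratio for new evidence.

    likelihood_ratio = P(evidence | event) / P(evidence | no event)
    """
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A 30% forecast, updated on evidence twice as likely under "yes" than "no":
print(round(bayes_update(0.30, 2.0), 3))  # 0.462
```

Small, frequent updates like this, rather than large occasional swings, are the pattern Hatch's accuracy figures reward.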
5. Keep score. When questions resolve, score the results and review your progress. Improving your forecasting is a learnable skill, and methods such as the Brier score will help you understand your strengths and opportunities.
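The Brier score mentioned here is simply the mean squared error between your probability forecasts and the 0/1 outcomes: 0 is a perfect score, and lower is better. A minimal sketch, with illustrative forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes (0 or 1).

    Ranges from 0 (perfect) to 1 (maximally wrong); lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three resolved questions: forecast 90%, 20%, 70%; outcomes yes, no, yes.
print(round(brier_score([0.9, 0.2, 0.7], [1, 0, 1]), 3))  # 0.047
```

Tracked over time, the score shows whether your updates and base rates are actually improving your accuracy, which is what makes forecasting a learnable skill.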