Could you explain the background to the book and the research it’s based on?
“The book came out of the work of Dr Philip Tetlock, who is one of the world’s leading forecasting researchers. Most organisations do a lot of forecasting, yet have little idea how good that forecasting is, or whether they could make it better. This became the basis for further research, and the book.”
What do super-forecasters have that makes them better forecasters than the rest?
“Phil's initial research looked at people whose job, to some degree, depended on making forecasts. So economists, political scientists, intelligence analysts, journalists. He had them make an enormous number of forecasts, and found the average forecast was about as good as a chimpanzee throwing a dart, i.e. random guessing.
“But this is an example of where the average obscures the reality, because there were two statistically distinguishable groups. One did considerably worse than random guessing, but another did considerably better, which is to say that they had real foresight. The question is what distinguishes the two groups.”
And what distinguishes super-forecasters from the inferior forecasters?
“PhDs or access to classified information didn’t make the real difference; it was the style of thinking. Phil labelled them as foxes and hedgehogs after an Ancient Greek poet, Archilochus, who said: ‘the fox knows many things, but the hedgehog knows one big thing’.
“Hedgehogs had one big idea and they weren't interested in hearing other perspectives. They wanted to keep it simple and elegant, and were more likely to use words like ‘impossible’ or ‘certain’. Conversely, foxes didn't have one big idea. They wanted to see other perspectives, hear other people’s views and learn about other ways of thinking, which made their analysis complex and messy, and made them much less likely to use words such as ‘impossible’ or ‘certain’.”
So if the foxes are better forecasters, why do we only hear from the hedgehogs?
“Data shows an inverse correlation between fame and accuracy: the more famous an expert is, the less accurate their forecasts are. That happens for a very specific reason. Hedgehogs are the kind of people you see on television who have one idea and are very strong and confident in voicing their opinion – that makes a great TV guest.
“Foxes are people who say: ‘Well, I don’t have one big idea, but I have many ideas, and I’m thinking that there are multiple factors at work’ – that person makes a terrible TV guest.”
Is that a problem if you’re trying to improve forecasting?
“It’s a huge problem. The hedgehog expert gives you a nice, simple, clear, logical story that ends with ‘I’m sure’. That’s psychologically satisfying and what we crave.
“The fox expert says: ‘Well, there are many factors involved and I’m not sure how this is going to work out.’ That drives us crazy, because it isn’t psychologically satisfying. This is the deep paradox at work. The person who recognises that uncertainty cannot be eliminated is more likely to be an accurate forecaster than the person who is dead sure they’ve got it all figured out.”
Are there things that are simply too big to forecast? Like the impact of Brexit?
“Usually with questions like this, we have people in the media taking contrary positions. They marshal their arguments and shout at each other. ‘Brexit will be a disaster.’ ‘Brexit will go swimmingly.’
“We advocate breaking big issues into finer-grained questions like: ‘If Brexit negotiations begin by this date, will the pound be above or below this level by the following date?’ That one question doesn’t settle it but if you ask dozens of fine-grained questions like that, you can aggregate the answers and test whether Brexit works out well or poorly.”
That only tells you who was right in retrospect, not who will be right going forward.
“That’s true. But if you face a similar circumstance, and if you have somebody who has demonstrated, with evidence, that they understood the dynamics that were at work last time, that’s probably somebody you should listen to the next time there’s an analogous situation.
“Conversely, if you have somebody who clearly didn’t have a clue what was happening, that’s probably somebody you shouldn’t be getting your forecasts from. That sounds blindingly obvious, but you’ll be amazed at how many sophisticated individuals judge the quality of a forecaster based on their title, how many books they sell, how famous they are and how many times they’ve been to Davos.”
Can poor forecasters learn to be good forecasters?
“Absolutely. Even if you are not naturally inclined to think like a fox and in probabilistic terms, you can make yourself do it because you know how it works, and you will get the results. It’s a question of what you do, not what you are.”
If you could be a banker for a day, what would you do regarding forecasting?
“A bank is in a wonderful position to improve forecasting, because it is awash in numbers and making forecasts all the time. So I would advise any banker to create an environment where they’re getting feedback about their forecasts. You should set up a system that gives you that clarity of feedback.
“Phil uses the term ‘forecasting tournaments’. If I were in charge of a bank, I’d have the equivalent of an internal forecasting tournament running all the time.”
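The feedback loop described above requires a way of scoring probabilistic forecasts against outcomes. A common choice for this (not named in the interview, but standard in forecasting-tournament research) is the Brier score: the mean squared difference between the probability assigned and what actually happened. A minimal sketch:

```python
def brier_score(forecasts):
    """Score a list of (probability, outcome) pairs.

    probability: the forecaster's stated chance that the event happens.
    outcome: 1 if it happened, 0 if it did not.
    Lower is better; always answering 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A hedged, well-calibrated forecaster:
fox = [(0.9, 1), (0.2, 0), (0.7, 1)]
# A forecaster who is always 'certain' and sometimes wrong:
hedgehog = [(1.0, 1), (1.0, 0), (0.0, 1)]

print(brier_score(fox))       # (0.01 + 0.04 + 0.09) / 3 ≈ 0.047
print(brier_score(hedgehog))  # (0 + 1 + 1) / 3 ≈ 0.667
```

Scored this way, overconfident misses are punished heavily, which is why the ‘dead sure’ style loses to the hedged one over many questions.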