One of the best ways to read outside your echo chamber and comfort zone, apart from publicly challenging yourself to read different things, is to be part of a book group. I’ve found myself part of a work-related book group that meets quarterly, and Superforecasting was our latest pick (Hillbilly Elegy was one we read previously). You can tell that this is a serious bunch by the fact that we have Evicted, Sapiens and The Righteous Mind to follow later in the year… I’m going to need to up my critical faculties game.
Anyway, Superforecasting is all about how we make better predictions and forecasts, stemming from a project Tetlock ran that tracked forecasts over time. His central premise, which is difficult to disagree with, is that ‘experts’ tend to make forecasts that are a) not very clear (or measurable), and b) never followed up or tracked to see if they come true. So, for example, we tend to say ‘name of expert predicted the 2008 crash’ without looking at what they actually said, or at all their other predictions and forecasts. Tetlock’s scientific approach demonstrated that experts were generally not very good at forecasting the future (even over the short and medium term). He then set out to find ways to do it better, building a team of forecasters who could be tracked over time and who took part in the Good Judgment Project, a competition established by the American intelligence community.
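(For the curious: the way the project makes forecasts measurable is to demand numeric probabilities and score them afterwards, using the Brier score – roughly, the squared gap between what you predicted and what happened. A minimal sketch of the idea; the simple binary form and the numbers here are my own illustration, not lifted from the book:)

```python
def brier_score(forecasts, outcomes):
    # Mean squared difference between the stated probability and what
    # actually happened (1 if the event occurred, 0 if not).
    # 0.0 is perfect; 0.25 is what always saying "50/50" earns you;
    # 1.0 is being confidently wrong every time.
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A well-calibrated hedger vs. an overconfident pundit, on the same three events
# (events 1 and 3 happened, event 2 did not):
print(round(brier_score([0.7, 0.3, 0.8], [1, 0, 1]), 3))  # 0.073
print(round(brier_score([1.0, 0.0, 0.0], [1, 0, 1]), 3))  # 0.333
```

The point of scoring this way is that vague punditry (“there could well be a crash”) can never be graded, while a stated probability always can.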
This is where, for me, it got more interesting: ‘superforecasters’ tend to be (apparently) less ideological but numerate and well educated; more open to a mix of external perspectives; able to admit openly what they got wrong; consistent in incrementally updating their views as new information arrives; and capable of absorbing and synthesising significant amounts of that information. None of this is surprising, although the fact that individuals, and teams of individuals, displaying these characteristics could outperform experts and market-based approaches perhaps is.
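(That ‘incrementally update’ habit is essentially Bayesian: start from a sensible base rate and nudge the belief as each piece of evidence arrives, rather than lurching to certainty. A toy sketch of the idea – the function and the numbers are my illustration, not Tetlock’s:)

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    # Bayes' rule: revise belief in a claim after seeing one piece of
    # evidence, given how likely that evidence is under each possibility.
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.60                      # starting view: 60% likely
belief = update(belief, 0.8, 0.3)  # supportive evidence -> belief rises to 0.80
belief = update(belief, 0.4, 0.7)  # contrary evidence -> eases back to ~0.70
print(round(belief, 2))            # 0.7
```

Small, repeated adjustments like this are what distinguish the superforecasters from both the never-budge ideologue and the over-reactor who flips wholesale on every headline.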
From an organisational perspective, I was interested in the section that contrasted what ‘leaders’ have to do (set vision, make swift decisions, display confidence) with these ‘forecaster’ characteristics (be pragmatic, absorb and analyse information, be open to being wrong, and constantly update). In a sense, this encapsulates some of the challenges of running an organisation: what helps build coherence, team and a sense of shared direction…in a context which is constantly changing, in flux and requires continuous adjustment. I think there is something interesting here that could have been pursued further, for example about where the leader and the forecaster more clearly overlap (accountability, belief in improvement, determination). Equally, I’d be interested in how this relates to the fact that people and organisations can seek to ‘create’ their future, and how they might use forecasting to do so.
[A side-bar – I was also intrigued to know how the superforecasters had done with regard to Brexit & Trump, particularly in this world of ‘fake news’ and misinformation: how do people who rely on information from online sources fare in this scenario? Tim Harford looks at this a little in an article called ‘Why forecasters failed to predict Trump’s victory’]
My main beef with the book is that it’s not enormously well written; Dan Gardner was clearly brought in as co-author to cut through Tetlock’s more complex academic language, but the book doesn’t really hold together as a coherent whole in tone or structure, and by the later pages (in which it tails off) I was slightly despairing of what other noun we would be putting the word ‘super’ in front of for a chapter title. It could also quite easily have been 50–75 pages shorter, and (for me) the sections on how we can actually use these insights needed to be sharper. There are other parts of the methodology I question, too, such as the lack of a ‘real’ control group (his controls are other forecasters). And the chapter in which his central thesis is challenged by Daniel Kahneman and Nassim Nicholas Taleb left me persuaded that those alternative perspectives had merit – and that I might be better off reading their books…
All in all, it left me intrigued and with some insights, but a little unsatisfied. There is a lot of interesting detail and evidence in the book, and plenty to chew on; for the first 100 pages or so, I was pretty gripped. But it tails off into repetition and less fascinating perspectives, and somewhat fails to deliver on its initial premise (and promise): helping us understand the art and science of forecasting, and use it in our own work and lives.