Call an expert or toss a coin?

Louis Menand reviewed Philip Tetlock’s new book, “Expert Political Judgment: How Good Is It? How Can We Know?”, which summarizes a twenty-year study of people who make prediction their business. In short, their predictions are worse than those of “dart-throwing monkeys.”

This is good news for strategists using future planning tools like scenario planning: they don’t need to be experts in order to find plausible (as opposed to probable) stories of the future. Unfortunately, the distinction between a futurist and an expert may be lost on many.

Here are my favorite bits from Menand’s article:

[Experts] have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake.

“Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.”

“Expert Political Judgment” is just one of more than a hundred studies that have pitted experts against statistical or actuarial formulas, and in almost all of those studies the people either do no better than the formulas or do worse.

The experts’ trouble in Tetlock’s study is exactly the trouble that all human beings have: we fall in love with our hunches, and we really, really hate to be wrong.

Most people tend to dismiss new information that doesn’t fit with what they already believe. Tetlock found that his experts used a double standard: they were much tougher in assessing the validity of information that undercut their theory than they were in crediting information that supported it… In the terms of Karl Popper’s famous example, to verify our intuition that all swans are white we look for lots more white swans, when what we should really be looking for is one black swan.

…like most of us, experts violate a fundamental rule of probabilities by tending to find scenarios with more variables more likely. If a prediction needs two independent things to happen in order for it to be true, its probability is the product of the probability of each of the things it depends on.
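The rule Menand describes can be sketched in a few lines of Python. The probabilities here are invented purely for illustration:

```python
# A minimal sketch of the conjunction rule, using made-up probabilities.
p_event_a = 0.7  # assumed probability of the first independent event
p_event_b = 0.6  # assumed probability of the second independent event

# For independent events, the joint probability is the product,
# so the combined forecast is always less likely than either part alone.
p_both = p_event_a * p_event_b

assert p_both <= min(p_event_a, p_event_b)
print(f"P(A and B) = {p_both:.2f}")  # P(A and B) = 0.42
```

Adding a third condition would multiply in yet another factor below one, shrinking the probability further; intuitively, though, the richer, more detailed scenario often *feels* more plausible.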

Tetlock: “Low scorers look like hedgehogs: thinkers who ‘know one big thing,’ aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who ‘do not get it,’ and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible ‘ad hocery’ that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess.”

In world affairs, parsimony may be a liability—but, even there, there can be traps in the kind of highly integrative thinking that is characteristic of foxes.