Nicholas Kristof of the NYT writes an amusing piece on the predictions of experts. He's interested in the so-called "Dr. Fox effect", based originally on a series of psychology experiments where a phony expert gave well-received but meaningless presentations. In more general form the Fox Effect refers to the tendency of otherwise savvy consumers of information to over-value the predictive capabilities of people who have educational credentials.
Classic example? Of course we all know: the economists! Other soothsayers include scientists and social scientists (sociologists, political scientists) who model the past behavior of complex systems and present their conclusions as having predictive -- not just descriptive -- authority. Who will be a world power a decade from now? Dr. Fox will tell you. Will we have a shortage of food in 25 years? Will the world witness a population explosion? A resurgent Russia? Chinese economic dominance? A nuclear war? (I would add: will Florida be under water from Global Warming in two decades?!) Dr. Fox will tell you. He's got the magical ability to take descriptions of the past, precisely cast in mathematical language, and transform them into predictions of the future.
Philip Tetlock, a UC Berkeley professor -- the "expert on experts" as Kristof calls him -- wrote a book on the Fox Effect, "Expert Political Judgment", in 2005. In it he tracked two decades of predictions, 82,000 in all, from 284 experts. The predictions covered subjects both within the experts' own fields of study and "on subjects that they knew little about." I'll let Kristof tell you the result:
"...The predictions of experts were, on average, only a tiny bit better than random guesses — the equivalent of a chimpanzee throwing darts at a board."
That's striking enough. I'll add some comments of my own:
(1) People reason: "If not the expert, then who?", which ignores the fact that judgment and common sense are often more valuable than credentials when it comes to complicated inferences about the future. Indeed, Tetlock notes that the one key indicator of poor predictive performance was fame; those experts who were most widely recognized consistently did worse.
(2) Complex systems -- like the economy -- are scary as hell, because no one (literally no one) understands how things will turn out. And, unfortunately, most of the world we care about is complex in this sense. There aren't any "Diffy Qs", as physics students say, that tell us the state of the system at time t + n for any n of meaningful size. But the experts make us feel better. They talk in sophisticated language, they explain what happened in the past (note that with regard to past events, they really are experts), and they tell us what it means for the future. We want to know what's coming around the bend, and some people seem capable of telling us.
We just don't notice that they keep getting it wrong. They don't know what will happen tomorrow any more than you do.
Read and be bummed out by our lack of predictive capabilities here.
Monday, March 30, 2009
3 comments:
I'd like to better understand what kinds of distributions are on the set of possible outcomes (and the topic area). How do they go about comparing with random guessing? (If I predict that GDP growth will be over 3% in the last quarter, what does a comparable random guess look like?)
From the article it appears that the skepticism you propose in (2) isn't fully justified. Doesn't the book contend that "foxes", i.e., those who are less bombastic and dogmatic but more likely to recognize complexity, be pragmatic, and adjust as appropriate, are in fact more likely to get things right? Or maybe that's what you're pointing to in (1).
Yeah, that's true. Tetlock introduces the Dr. Fox Effect to explain how experts get things wrong, and then introduces the hedgehog as the really flawed kind of expert, while the fox is more likely to get things right. That's interesting.
Maybe Kristof at the NYT needs to be clearer! But Tetlock's findings about the accuracy of expert opinions are the point of the piece, and here the evidence is that they are often wrong. I guess "just better than random" means something, however.
Sorry, I didn't catch the question about possible outcomes. You'll need to read the book, or do some extra research, to get that question answered. Let me know if you do this...