Superforecasting


Superforecasting: The Art and Science of Prediction



I went to see Philip Tetlock at AEI last night. Tetlock built his reputation by assessing the actual ability of experts to predict complex political and societal trends. His research showed that the experts were not much better than, and sometimes not as good as, random chance.

A blind monkey can beat the experts at actual predictions; experts add value by framing questions and identifying options

The classic monkey throwing darts at the Wall Street Journal can often beat the stock pickers. This result is not as damning as it appears on its face. Experts may be good at framing questions, which is really the hardest part of decision making, and at identifying options. The dart-flinging monkey has a simpler problem set, one already designed by experts.

Bayesian is better

Predictions could also be improved by using a Bayesian approach, i.e. continually integrating new information and speaking in terms of probabilities rather than vague "could happen." The biggest impediment to experts doing this was their dislike of being seen as uncertain or of changing their minds. Prognosticators are not supposed to flip-flop, even if flip-flopping improves their outcomes. As a result, most experts talk in vague "could be" terms rather than actionable probabilities, and can trim their predictions to fit the results after the fact.
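To make the Bayesian idea concrete, here is my own small sketch (the numbers and the scenario are invented for illustration, not from the talk): starting from a prior probability, each new piece of evidence updates the forecast via Bayes' rule.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior P(hypothesis | evidence) via Bayes' rule.

    prior                -- P(hypothesis) before seeing the evidence
    p_evidence_if_true   -- P(evidence | hypothesis is true)
    p_evidence_if_false  -- P(evidence | hypothesis is false)
    """
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical: start at 30% that some event occurs this year; a news
# report arrives that is three times as likely if the event is coming.
posterior = bayes_update(0.30, 0.60, 0.20)
print(round(posterior, 2))  # the 30% prior moves up to about 0.56
```

The point is the discipline, not the arithmetic: the forecaster must state a number up front and then visibly move it as evidence arrives, which is exactly the "flip-flopping" that pundits avoid.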

Most social science studies cannot be replicated

Unfortunately, the social sciences are full of bad studies. Recent research indicated that 2/3 of published studies in the social sciences could not be replicated, i.e. were probably wrong. There is significant bias at work, and it is difficult to overestimate the power of preconceptions to shape perceptions. Bayesian analysis does not eliminate biases or preconceptions, but it does make them explicit, and so testable, and so subject to modification, which inspires learning. You cannot improve if you don't keep score. This is a way of keeping score.
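Keeping score can be done with the Brier score, the measure Tetlock's forecasting tournament used: the mean squared distance between your stated probabilities and what actually happened. The forecaster and the numbers below are my own made-up example.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary
    outcomes (1 = happened, 0 = didn't). Lower is better: 0.0 is
    perfect, and a constant 50% guess always earns 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: three events, stated probabilities vs. results.
score = brier_score([0.9, 0.7, 0.2], [1, 1, 0])
print(round(score, 3))  # well under the 0.25 of a coin-flipper
```

A scoring rule like this is what turns vague "could be" talk into something that can be audited, compared across forecasters, and improved upon.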

"Good Judgment Project"

Tetlock was actually at AEI to talk about his new book, "Superforecasting: The Art and Science of Prediction," in which he describes his IARPA-funded "Good Judgment Project." It was a kind of prediction tournament. The only criterion was the accuracy of the predictions. They started with the base rate. Recall that even a broken clock is right twice a day, and a random guess will sometimes produce a correct result. For illustration, a base rate of 25% would be the expected outcome if you guessed randomly on a multiple-choice test where every question had four choices. For the tournament they tried to choose questions whose answers would become known with the passage of time and would not be subject to lots of interpretation. They also wanted events in the "Goldilocks zone," i.e. not something so simple that results could be predicted with certainty using equations and past experience, and not something completely random like a fair roulette wheel, where any patterns you identified would be mistakes.

They were looking for elite "superforecasters." Tetlock joked that he was not trying to be inclusive in the results, since some people would simply be better than others, but the tournament was open to all, with the best rising to the top.

Let them try and see who does it best

The tournament was a proof of concept. Predictions can be made better, although never perfect. Successful superforecasters tended to learn quickly from their mistakes, to adjust, and to take into account a wider variety of information sources. It really does help to have discordant and even unpleasant information. You cannot make sound decisions if you are afraid to offend someone. But recall that the person with the extreme view is sometimes right and usually useful; for the most part the probabilities work, i.e. the random weirdo is unlikely to be Einstein.

In many ways, the new science of forecasting is disruptive: it works against established experts and so is difficult to plant in an organization with a hierarchy. The best results may come from people of lower status. They have the advantage of not having bought into the current reality.

Don’t mistake ONE common man for THE common man

Again, we do not want to take more from this lesson than it has to teach. The headline that "Common folks beat the experts" is misleading. THE common folks (the masses in aggregate) can produce lots of good ideas, but the chance of A (i.e. any particular) common man doing so is low. If you take too much advice from the common man you meet on the street, you will soon get grief, and deserve the grief you get.

Anyway, I bought the book (paid more for the honor of buying it at the event) and will read it. What I heard tracks with a lot of what I have read about decision-making. Of course, that might be a false correlation, since Tetlock has been a source for many other things I have read, and the people I have read (guys like Kahneman and Tversky) have influenced Tetlock. I suppose they would call that confirmation bias.


This entry was posted in Book Reviews.
