Superforecasting

Book: Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner. PDF: https://esotericlibrary.weebly.com/uploads/5/0/7/7/5077636/philip_e._tetlock_-_superforecasting_the_art_and_science_of_prediction.pdf

https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock

Notes

  • Foresight is real. Some people—people like Bill Flack—have it in spades. They aren’t gurus or oracles with the power to peer decades into the future, but they do have a real, measurable skill at judging how high-stakes events are likely to unfold three months, six months, a year, or a year and a half in advance.

  • Try answering this: “A bat and ball together cost $1.10. The bat costs a dollar more than the ball. How much does the ball cost?” If you are like just about everybody who has ever read this famous question, you instantly had an answer: “Ten cents.” You didn’t think carefully to get that. You didn’t calculate anything. It just appeared. For that, you can thank System 1. Quick and easy, no effort required. But is “ten cents” right? Think about the question carefully. You probably realized a couple of things. First, conscious thought is demanding. Thinking the problem through requires sustained focus and takes an eternity relative to the snap judgment you got with a quick look. Second, “ten cents” is wrong. It feels right. But it’s wrong. In fact, it’s obviously wrong—if you give it a sober second thought. The bat-and-ball question is one item in an ingenious psychological measure, the Cognitive Reflection Test, which has shown that most people—including very smart people—aren’t very reflective. They read the question, think “ten cents,” and scribble down “ten cents” as their final answer without thinking carefully. So they never discover the mistake, let alone come up with the correct answer (five cents). That is normal human behavior. We tend to go with strong hunches. System 1 follows a primitive psycho-logic: if it feels true, it is.
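
For reference, the arithmetic behind the five-cent answer, writing b for the ball’s price in dollars:

    b + (b + 1.00) = 1.10
    2b + 1.00 = 1.10
    2b = 0.10
    b = 0.05

So the ball costs five cents and the bat $1.05: together they make $1.10, and the bat is exactly one dollar more than the ball.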

  • Consider that if an intelligence agency says there is a 65% chance that an event will happen, it risks being pilloried if it does not—and because the forecast itself says there is a 35% chance it will not happen, that’s a big risk. So what’s the safe thing to do? Stick with elastic language. Forecasters who use “a fair chance” and “a serious possibility” can even make the wrong-side-of-maybe fallacy work for them: If the event happens, “a fair chance” can retroactively be stretched to mean something considerably bigger than 50%—so the forecaster nailed it. If it doesn’t happen, it can be shrunk to something much smaller than 50%—and again the forecaster nailed it. With perverse incentives like these, it’s no wonder people prefer rubbery words over firm numbers.

  • By contrast, meaningful accountability requires more than getting upset when something goes awry. It requires systematic tracking of accuracy—for all the reasons laid out earlier. But the intelligence community’s forecasts have never been systematically assessed.
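
Keeping score is how the book resolves both the wrong-side-of-maybe problem and the accountability problem above: record explicit probability forecasts and grade them against outcomes once the questions resolve. The measure Tetlock uses is the Brier score. A minimal sketch in Python follows (the function names and the sample track record are illustrative inventions, not taken from the book or from any forecasting library):

    # Brier score in the two-sided form the book describes:
    # 0 is perfect, 0.5 is what always guessing 50% earns, 2 is maximally wrong.
    def brier_score(forecast, outcome):
        return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

    def mean_brier(forecasts, outcomes):
        # Accuracy is a property of a track record, not of a single call:
        # one miss cannot convict a 65% forecast of being "wrong."
        return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical track record: stated probabilities, then what actually happened (1 or 0).
    forecasts = [0.65, 0.80, 0.10, 0.95, 0.50]
    outcomes = [1, 1, 0, 1, 0]
    print(round(mean_brier(forecasts, outcomes), 3))  # prints 0.17; lower is better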