- Only give your System 1 credence if you've had the opportunity to train its pattern matching on reliable cues. Even then, double-check rigorously
- Many forecasts aren't really forecasts: "some chance", "high chance", "high impact". We need explicit probabilities and clear wording.
- Be both an information aggregator and a perspective aggregator (the latter matters in games where you have to beat the crowd)
- Just because a forecast was wrong doesn't mean it was unreasonable or a bad prediction. "Was it reasonable?" is hard to answer, so we substitute the easier question "was it correct?" (a scoring sketch follows these bullets)
- Take the outside view and then the inside view. Be self-critical, and consult and think from several perspectives
- Probabilistic thinking is opposed to "it was meant to happen" ideology; non-fate-based thinking correlates with higher forecasting scores
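Tetlock's Good Judgment Project makes "was it correct?" answerable in aggregate by scoring many resolved forecasts with Brier scores. A minimal sketch, using the common single-probability variant (the book's version sums squared error over both outcome categories, doubling the scale); all numbers are illustrative:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    0.0 is perfect; 0.25 is what always saying 50% earns; 1.0 is worst.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# One "wrong" call (0.7 on an event that didn't happen) barely dents an
# otherwise well-calibrated record; judge the track record, not one miss.
print(brier_score([0.7, 0.9, 0.1, 0.8], [0, 1, 0, 1]))  # 0.1375
```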
- Mini-handbook (from about halfway through the book):
- Unpack the question into components
- Distinguish as sharply as you can between the known and unknown and leave no assumption unscrutinized
- Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena.
- Then adopt the inside view that plays up the uniqueness of the problem
- Also explore the similarities and differences between your views and those of others, and pay special attention to prediction markets and other methods of extracting wisdom from crowds (see also the wisdom-of-crowds discussion in Thinking, Fast and Slow)
- Synthesize all these different views into a single vision as acute as that of a dragonfly (see the pooling sketch below)
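One way to cash out the dragonfly-eye synthesis: pool several perspectives' probabilities. A minimal sketch averaging in log-odds space, with an optional extremizing factor of the kind studied in the Good Judgment Project's aggregation work (the function name and all numbers are my own, illustrative choices):

```python
import math

def pool_forecasts(probs, extremize=1.0):
    """Pool several forecasters' probabilities for the same binary event.

    Averages in log-odds space; extremize > 1 pushes the result away from
    0.5, compensating for forecasters sharing only partial information.
    """
    mean_log_odds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    z = extremize * mean_log_odds
    return 1 / (1 + math.exp(-z))

views = [0.60, 0.70, 0.55, 0.65]             # four independent perspectives
print(round(pool_forecasts(views), 3))       # plain log-odds mean, ~0.63
print(round(pool_forecasts(views, 2.0), 3))  # extremized variant, ~0.74
```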
- Express your judgement as precisely as you can, using a finely grained scale of probability
- When should you update in response to new information?
- You have to strike a careful balance between overreaction and underreaction.
- To do this, make many small updates, rather than large updates (overreaction) or only a few (underreaction); see the sketch below
- P(H|D) / P(¬H|D) = [P(D|H) / P(D|¬H)] × [P(H) / P(¬H)], i.e. posterior odds = likelihood ratio × prior odds
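A minimal sketch of that odds-form update; the likelihood ratio of 1.5 and starting probability are illustrative:

```python
def bayes_update(prob, likelihood_ratio):
    """One Bayesian update in odds form: posterior odds = LR x prior odds.

    The likelihood ratio P(D|H) / P(D|not-H) says how much more likely the
    new evidence is if the hypothesis is true than if it is false.
    """
    odds = prob / (1 - prob) * likelihood_ratio
    return odds / (1 + odds)

# Many small updates: each piece of weak evidence (LR = 1.5) nudges the
# probability a little, instead of one dramatic swing.
p = 0.30
for _ in range(4):
    p = bayes_update(p, 1.5)
    print(round(p, 3))  # 0.391, 0.491, 0.591, 0.684
```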
- None of this should be set in stone: "rather break the rules than make a barbarous forecast"
- What superforecasters tend to be:
- Philosophy
- Cautious: Nothing is certain
- Humble: Reality is infinitely complex
- Nondeterministic: What happens is not meant to be and does not have to happen
- Thinking styles
- Actively open-minded
- Intelligent and knowledgeable
- Reflective
- Numerate
- Methods of forecasting
- Pragmatic
- Analytic
- Dragonfly-eyed
- Probabilistic
- Thoughtful Updaters
- Good Intuitive Psychologists
- Work Ethic
- Growth Mindset
- Grit
- Leadership
- As a leader, you have to learn how to balance boldness and decisiveness with deliberation and forecasting. Build a culture of criticism, and tell people what you want them to do while letting them figure out how
- How should you combine the humility you need as a forecaster with the confidence you need as a leader? Have intellectual humility about the game, but not necessarily humility about your own abilities
- Future steps for forecasting
- symbiosis between superquestioners and superforecasters
- Potential research questions Phil Tetlock is pursuing: breaking complex, big-picture questions into clusters of smaller, scorable questions whose answers aggregate into insight on the big picture
- more public accountability of forecasters
- It's easy to fall into the trap of learning too much from one success (cf. "Against Learning from Dramatic Events")
- Ten Commandments for Aspiring Superforecasters
- Triage (focus on the right problems)
- Break seemingly intractable problems into tractable sub-problems (Fermi-style; see the sketch at the end of this list)
- Strike the right balance between inside and outside views
- Strike the right balance between under-reaction and over-reaction
- Look for the clashing causal forces at work
- Distinguish between as many degrees of uncertainty as necessary
- Strike the right balance between under- and overconfidence
- Look for the errors behind your mistakes
- Bring out the best in others
- Master deep, deliberative practice
- Don't treat commandments as commandments

Good summary: https://aiimpacts.org/evidence-on-good-forecasting-practices-from-the-good-judgment-project/
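Commandment 2 ("break seemingly intractable problems into tractable sub-problems") is Fermi-style decomposition; the book's running example is estimating the number of piano tuners in Chicago. A toy sketch, where every number is an illustrative guess rather than real data:

```python
# Fermi decomposition: multiply rough, independently estimated factors.
households = 3_500_000        # households in greater Chicago (guess)
piano_share = 1 / 20          # fraction of households with a piano (guess)
tunings_per_piano = 1         # tunings per piano per year (guess)
tunings_per_tuner = 2 * 250   # 2 tunings/day x 250 working days (guess)

pianos = households * piano_share
tuners = pianos * tunings_per_piano / tunings_per_tuner
print(round(tuners))  # ~350 full-time piano tuners
```

Each factor is easier to estimate than the original question, and errors in the guesses tend to partially cancel when multiplied.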