It was easiest to beat chance on the shortest-range questions that only required looking one year out, and accuracy fell off the further out experts tried to forecast–approaching the dart-throwing chimpanzee level three to five years out. p5
Elsewhere, protests swelled into rebellions, rebellions into civil wars. This was the Arab Spring–and it started with one poor man, no different from countless others, being harassed by police, as so many have been, before and since, with no apparent ripple effects. p7
“Predictability: Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” A decade earlier, Lorenz had discovered by accident that a tiny data entry variation in computer simulations of weather patterns–like replacing 0.506127 with 0.506–could produce dramatically different long-term forecasts. It was an insight that would inspire “chaos theory”: in nonlinear systems like the atmosphere, even small changes in initial conditions can mushroom to enormous proportions. p8
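The mechanism is easy to see in miniature. A minimal Python sketch, using the logistic map (a standard toy chaotic system, not Lorenz's weather model) to show how truncating an initial value the way Lorenz did sends two runs of the same deterministic rule to completely different places:

```python
# Sensitive dependence on initial conditions, in miniature. The logistic map
# is a toy chaotic system (not Lorenz's weather model), but it shows the same
# effect he stumbled on: rounding 0.506127 to 0.506 soon yields a completely
# different trajectory, even though the rule itself is fully deterministic.

def logistic_map(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

full = logistic_map(0.506127, 30)
rounded = logistic_map(0.506, 30)  # the same start, truncated

for step in (0, 5, 10, 15, 20, 25):
    print(f"step {step:2d}: {full[step]:.6f} vs {rounded[step]:.6f}")
# The runs agree at first, drift apart around step 10, and soon bear
# no resemblance to each other at all.
```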
Laplace called his imaginary entity a “demon.” If it knew everything about the present, Laplace thought, it could predict everything about the future. It would be omniscient. Lorenz poured cold rainwater on that dream. If the clock symbolizes perfect Laplacean predictability, its opposite is the Lorenzian cloud. p9
it’s misguided to think anyone can see very far into the future p10
How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances. p13
Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: the consumers of forecasting–governments, businesses, and the public–don’t demand evidence of accuracy. So there is no measurement. Which means no revision. And without revision, there can be no improvement. p14
“I have been struck by how important measurement is to improving the human condition,” Bill Gates wrote. “You can achieve incredible progress if you set a clear goal and find a measure that will drive progress toward that goal…” p15
One, foresight is real. p18
The other conclusion is what makes these superforecasters so good. It’s not really who they are. It is what they do. Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person. p18
…our analyses have consistently found commitment to self-improvement to be the strongest predictor of performance. p20
The point is now indisputable: when you have a well-validated statistical algorithm, use it. p21
Illusions of Knowledge
That’s the problem. Cochrane didn’t doubt the specialist, the specialist didn’t doubt his own judgement, and so neither man considered the possibility that the diagnosis was wrong, and neither thought it wise to wait for the pathologist’s report before closing the books on the life of Archie Cochrane. p25
Randomly assigning people to one group or the other would mean whatever differences there are among them should balance out if enough people participated in the experiment. Then we can confidently conclude that the treatment caused any differences in observed outcomes. It isn’t perfect. There is no perfection in our messy world. p29
The idea of randomized controlled trials was painfully slow to catch on, and it was only after World War II that the first serious trials were attempted. p30
In describing how we think and decide, modern psychologists often deploy a dual-system model that partitions our mental universe into two domains. System 2 is the familiar realm of conscious thought. It consists of everything we choose to focus on. By contrast, System 1 is largely a stranger to us. It is the realm of automatic perceptual and cognitive operation. p33
the Cognitive Reflection Test, which has shown that most people–including very smart people–aren’t very reflective. p34
System 1 is designed to jump to conclusions from little evidence. p35
within System 1–making it automatic, fast, and complete within a few tenths of a second. You see the shadow. Snap! You are frightened–and running. That’s the “availability heuristic,” one of many System 1 operations–or heuristics–discovered by Daniel Kahneman. p35
In fact, in science, the best evidence that a hypothesis is true is often an experiment designed to prove the hypothesis is false, but which fails to do so. p38
our natural inclination is to grab on to the first plausible explanation and happily gather supportive evidence without checking its reliability. This is what psychologists call confirmation bias. p39
But as we see every time someone spots the Virgin Mary in burnt toast or in the mold on a church wall, our pattern recognition ability comes at the cost of susceptibility to false positives. This, plus the many other ways in which the tip-of-your-nose perspective can generate perceptions that are clear, compelling, and wrong, means intuition can fail as spectacularly as it can work. p43
All too often, forecasting in the twenty-first century looks too much like nineteenth-century medicine. There are theories, assertions, and arguments. There are famous figures, as confident as they are well compensated. But there is little experimentation, or anything that could be called science, so we know much less than most people realize. p45
Keeping Score
With no time frame, there is no way to resolve these arguments to everyone’s satisfaction. p52
Similarly, forecasts often rely on implicit understanding of key terms rather than explicit definitions. p52
As Kent wrote, “estimating is what you do when you do not know.” And as Kent emphasized over and over, we never truly know what will happen next. Hence forecasting is all about estimating the likelihood of something happening… p54
…vague thoughts are easily expressed with vague language but when forecasters are forced to translate terms like “serious possibility” into numbers, they have to think carefully about how they are thinking, a process known as metacognition. p57
Forecasters who use “a fair chance” and “a serious possibility” can even make the wrong-side-of-maybe fallacy work for them… p58
With perverse incentives like these, it’s no wonder people prefer rubbery words over firm numbers. p58
The math behind this system was developed by Glenn W. Brier in 1950, hence the results are called Brier scores. p64
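For reference, the score itself is just squared error between probability forecasts and what actually happened. A minimal sketch of the two-alternative form the book describes (0 is perfect, 2 is maximally wrong):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Brier (1950) score in the two-alternative form used in the book:
    squared error summed over both outcomes. 0 is perfect, 2 is the worst
    (saying 100% and being wrong); always guessing 50% scores 0.5."""
    return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

print(f"{brier_score(0.8, 1):.2f}")  # 0.08 -> confident and right
print(f"{brier_score(0.8, 0):.2f}")  # 1.28 -> confident and wrong
print(f"{brier_score(0.5, 1):.2f}")  # 0.50 -> hedging everything at 50/50
```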
The critical factor was how they thought. p68
The other group consisted of more pragmatic experts who drew on many analytical tools, with the choice of tool hinging on the particular problem they faced. These experts gathered as much information from as many sources as they could. When thinking, they often shifted mental gears, sprinkling their speech with transition markers such as “however,” “but,” “although,” and “on the other hand.” They talked about possibilities and probabilities, not certainties. And while no one likes to say “I was wrong,” these experts more readily admitted it and changed their minds. p69
The fox knows many things but the hedgehog knows one big thing. p69
But the hedgehog also “knows one big thing,” the Big Idea he uses over and over when trying to figure out what will happen next. Think of that Big Idea like a pair of glasses that the hedgehog never takes off. The hedgehog sees everything through those glasses. And they aren’t ordinary glasses…wearing green-tinted glasses may sometimes be helpful, in that they accentuate something real that might otherwise be overlooked…but far more often green-tinted glasses distort reality…so the hedgehog’s one Big Idea doesn’t improve foresight. It distorts it. And more information doesn’t help because it’s all seen through the same tinted glasses. It may increase the hedgehog’s confidence, but not his accuracy. p71
Aggregating the judgement of many consistently beats the accuracy of the average member of the group. p73
…aggregating the judgements of an equal number of people who know lots about lots of different things is the most effective because the collective pool of information becomes much bigger. p74
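A toy simulation (my numbers, not the book's) of the simplest part of why aggregation works: when errors are independent and roughly unbiased, they cancel. The book's stronger point, that diverse forecasters contribute *different* information, is on top of this.

```python
import random

random.seed(42)

# 1,000 forecasters estimate the same quantity with independent, unbiased
# errors. The average member is off by a lot; the mean of all their
# judgements is off by very little, because independent errors cancel.
true_value = 100.0
estimates = [random.gauss(true_value, 20) for _ in range(1000)]

avg_individual_error = sum(abs(e - true_value) for e in estimates) / len(estimates)
aggregate = sum(estimates) / len(estimates)

print(f"average individual error: {avg_individual_error:.1f}")                 # about 16
print(f"error of the aggregated estimate: {abs(aggregate - true_value):.2f}")  # well under 1
```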
What I should have done is look at the problem from both perspectives–the perspective of both logic and psycho-logic–and combine what I saw. p76
Foxes aggregate perspectives. p77
My fox/hedgehog model is not a dichotomy. It is a spectrum. p79
“All models are wrong,” the statistician George Box observed, “but some are useful.” The fox/hedgehog model is the starting point, not the end. p80
Superforecasters
To have accountability for process but not accuracy is like ensuring that physicians wash their hands, examine the patient, and consider all symptoms, but never checking to see whether the treatment works. p87
Quit pretending you know things you don’t and start running experiments. Give the training to one randomly chosen group of forecasters but not another. Keep all else constant. Compare results. If the trainees become more accurate, while the untrained don’t, the training is working. p89
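A sketch of that experiment's shape (all numbers invented): randomize who gets trained, hold everything else constant, and compare average Brier scores.

```python
import random

random.seed(7)

# Randomly assign 200 forecasters to training or control, then compare
# average accuracy. The effect size inside simulated_brier() is invented;
# in a real trial it is exactly the thing you do not know in advance.
ids = list(range(200))
random.shuffle(ids)
trained, control = ids[:100], ids[100:]

def simulated_brier(got_training: bool) -> float:
    base = 0.35 if got_training else 0.45   # pretend training helps a little
    return max(0.0, random.gauss(base, 0.10))

trained_mean = sum(simulated_brier(True) for _ in trained) / len(trained)
control_mean = sum(simulated_brier(False) for _ in control) / len(control)

print(f"trained mean Brier: {trained_mean:.3f}")   # lower = more accurate
print(f"control mean Brier: {control_mean:.3f}")
```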
But as Mauboussin notes, there is an elegant rule of thumb that applies to athletes and CEOs, stock analysts and superforecasters. It involves “regression to the mean.” p99
So regression to the mean is an indispensable tool for testing the role of luck in performance: Mauboussin notes that slow regression is more often seen in activities dominated by skill, while faster regression is more associated with chance. p101
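A simulation of that rule of thumb (parameters invented): score each player as skill plus luck, take round one's top decile, and see how much of their edge survives into round two.

```python
import random

random.seed(0)

def retained_edge(skill_sd: float, luck_sd: float, n: int = 10_000) -> float:
    """Fraction of the top decile's round-1 edge that persists in round 2:
    near 1.0 means slow regression (skill), near 0.0 fast regression (luck)."""
    skills = [random.gauss(0, skill_sd) for _ in range(n)]
    round1 = [s + random.gauss(0, luck_sd) for s in skills]
    round2 = [s + random.gauss(0, luck_sd) for s in skills]
    top = sorted(range(n), key=round1.__getitem__, reverse=True)[: n // 10]
    edge1 = sum(round1[i] for i in top) / len(top)
    edge2 = sum(round2[i] for i in top) / len(top)
    return edge2 / edge1

print(f"skill-dominated: {retained_edge(skill_sd=3, luck_sd=1):.2f}")  # ~0.90
print(f"luck-dominated:  {retained_edge(skill_sd=1, luck_sd=3):.2f}")  # ~0.10
```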
Supersmart?
High-powered pattern recognition skills won’t get you far, though, if you don’t know where to look for patterns in the real world. So we measured crystallized intelligence–knowledge… p108
The first thing they would do is find out what percentage of American households own a pet. Statisticians call that the base rate–how common something is within a broader class. Daniel Kahneman has a much more evocative visual term for it. He calls it the “outside view”–in contrast to the “inside view,” which is the specifics of a particular case. p118
Coming up with an outside view, an inside view, and a synthesis of the two isn’t the end. It’s a good beginning. Superforecasters constantly look for other views they can synthesize into their own. p123
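In arithmetic terms the procedure is just: anchor on the base rate, then adjust for case specifics. A hypothetical walk-through (both numbers invented for illustration, not from the book):

```python
# Outside view first: anchor on how common pet ownership is among households
# in general (the base rate; 0.62 here is an illustrative figure).
outside_view = 0.62

# Inside view second: adjust for the specifics of this one family, say a
# house with a yard and kids who talk about their dog.
inside_adjustment = 0.15

estimate = min(1.0, max(0.0, outside_view + inside_adjustment))
print(f"base rate, adjusted for case specifics: {estimate:.0%}")  # 77%
```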
It’s not the raw crunching power you have that matters most. It’s what you do with it. p126
Active open-mindedness (AOM) is a term coined by the psychologist Jonathan Baron… Baron’s test for AOM asks whether you agree or disagree with the following statements:
People should take into consideration evidence that goes against their beliefs
It is more important to pay attention to those who disagree with you than to pay attention to those who agree with you
Changing your mind is a sign of weakness
Intuition is the best guide in making decisions
It is important to persevere in your beliefs even when evidence is brought to bear against them.
Superquants?
Before that, people had no choice but to rely on the tip-of-your-nose perspective. You see a shadow moving in the long grass. Should you worry about lions? You try to think of an example of a lion attacking from the long grass. If the example comes to mind easily, run! As we saw in chapter 2, that’s your System 1 at work. If the response is strong enough, it can produce a binary conclusion: “Yes, it’s a lion,” or “No, it’s not a lion.” But if it’s weaker, it can produce an unsettling middle possibility: “Maybe it’s a lion.” What the tip-of-your-nose perspective will not deliver is a judgement so fine-grained that it can distinguish between, say, a 60% chance that it is a lion and an 80% chance. That takes slow, conscious, careful thought. Of course, when you are dealing with the pressing existential problems our ancestors faced, it was rarely necessary to make such fine distinctions. It may not even have been desirable. A three-setting dial gives quick, clear directions. Is that a lion? YES = run! MAYBE = stay alert! NO = relax. The ability to distinguish between a 60% probability and an 80% probability would add little. In fact, a more fine-grained analysis could slow you down–and get you killed. p137
Why is a decline from 5% to 0% so much more valuable than a decline from 10% to 5%? Because it delivers more than a 5% reduction in risk. It delivers certainty. Both 0% and 100% weigh far more heavily on our minds. p138
“there is no such thing as failure. Failure is just life trying to move us in another direction…Learn from every mistake because experience, encounter, and particularly your mistakes are there to teach you and force you into being who you are.” p148
Supernewsjunkies?
Superforecasters update much more frequently, on average, than regular forecasters. p154
“When the facts change, I change my mind,” the legendary British economist John Maynard Keynes declared.
So there are two dangers a forecaster faces after making the initial call. One is not giving enough weight to new information. That’s underreaction. The other danger is overreacting to new information, seeing it as more meaningful than it is, and adjusting a forecast too radically. p158
But when Jean-Pierre makes a forecast in his specialty, that block is lower in the structure, sitting next to a block of self-perception, near the tower’s core. So it’s a lot harder to pull that block out without upsetting the other blocks. p162
So how do they do it? In the nineteenth century, when prose was never complete without a sage aside to Greek mythology, any discussion of two opposing dangers called for Scylla and Charybdis. Scylla was a rock shoal off the coast of Italy. Charybdis was a whirlpool off the coast of Sicily, not far away. Sailors knew they would be doomed if they strayed too far in either direction. Forecasters should feel the same about under- and overreaction to new information, the Scylla and Charybdis of forecasting. Good updating is all about finding the middle passage. p166
But I haven’t yet mentioned the magnitude of his constant course corrections. In almost every case they are small. And that makes a big difference. p167
In simple terms, [[Bayes Theorem]] says that your new belief should depend on two things–your prior belief (and all the knowledge that informed it) multiplied by the “diagnostic value” of the new information. p170
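In odds form, which is probably the easiest way to compute by hand, the rule the note describes is: posterior odds = prior odds × likelihood ratio. A minimal sketch (the numbers are invented):

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability from a prior and the diagnostic value of new
    evidence, i.e. P(evidence | true) / P(evidence | false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.40                       # prior: 40% chance the event happens
belief = bayes_update(belief, 2.0)  # moderately diagnostic news
print(f"after strong-ish evidence: {belief:.0%}")   # 57%
belief = bayes_update(belief, 1.1)  # barely diagnostic news
print(f"after weak evidence:       {belief:.0%}")   # 59%
# Weak evidence moves the needle only slightly: the small, frequent course
# corrections the superforecasters favor.
```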
Perpetual Beta
Many people have what she calls a “fixed mindset”–the belief that we are who we are, and abilities can only be revealed, not created and developed. p175
For [[John Maynard Keynes]], failure was an opportunity to learn–to identify mistakes, spot new alternatives, and try again. p177
The one consistent belief of the “consistently incorrect” [[John Maynard Keynes]] was that he could do better. Failure did not mean he had reached the limits of his ability. It meant he had to think hard and give it another go. Try, fail, adjust, try again. [[John Maynard Keynes]] cycled through those steps ceaselessly. p178
The lesson for forecasters who would judge their own vague forecasts is: don’t kid yourself. p182
The second big barrier to feedback is time lag. p182
…you are likely to be afflicted by what psychologists call hindsight bias. p182
Once we know the outcome of something, that knowledge skews our perception of what we thought before we knew the outcome: that’s hindsight bias. p184
Grit is passionate perseverance of long-term goals, even in the face of frustration and failure. Married with a growth mindset, it is a potent force for personal progress. p188
The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement.
Superteams
“members of any small cohesive group tend to maintain esprit de corps by unconsciously developing a number of shared illusions and related norms that interfere with critical thinking and reality testing.” p196
Be cooperative but not deferential. Consensus is not always good; disagreement not always bad. If you do happen to agree, don’t take that agreement–in itself–as proof that you are right. Never stop doubting. p199
Practice “constructive confrontation,” to use the phrase of [[Andy Grove]]. Precision questioning is one way to do that. p199
How did superteams do so well? By avoiding the extremes of groupthink and Internet flame wars. And by fostering minicultures that encouraged people to challenge each other respectfully, admit ignorance, and request help. p207
…the aggregation of different perspectives is a potent way to improve judgement, but the key word is different. p209
The Leader’s Dilemma
The statement was refined and repeated over the decades, and today soldiers know it as “no plan survives contact with the enemy.” p214
The fundamental message: think. p215
“Clarification of the enemy situation is an obvious necessity, but waiting for information in a tense situation is seldom the sign of strong leadership–more often of weakness.” p216
The Wehrmacht also drew a sharp line between deliberation and implementation: once a decision has been made, the mindset changes. Forget uncertainty and complexity. Act! p216
Auftragstaktik blended strategic coherence and decentralized decision making with a simple principle: commanders were to tell subordinates what their goal is but not how to achieve it. p217
In fact, ex-military officers advising corporations often find themselves telling executives to worry less about status and more about empowering their people and teams to choose the best ways to achieve shared goals. p226
Are They Really So Super?
Even knowing it’s an illusion doesn’t switch off the illusion. The cognitive illusions that the tip-of-your-nose perspective sometimes generates are similarly impossible to stop. p233
…to resist a bias of particularly deep relevance to forecasting: scope insensitivity. p234
It’s classic bait and switch. Instead of answering the question asked–a difficult one that requires putting a money value on things we never monetize–people answered “How bad does this make me feel?” Whether the question is about 2,000 or 200,000 dying ducks, the answer is roughly the same: bad. Scope recedes into the background–and out of sight, out of mind. p235
The “black swan” is therefore a brilliant metaphor for an event so far outside experience we can’t even imagine it until it happens. p238
History does sometimes jump. But it also crawls, and slow, incremental change can be profoundly important. p240
“All of which is to say that I’m not sure what 2010 will look like,” concluded its author, Linton Wells, “but I’m sure that it will be very little like what we expect, so we should plan accordingly.” p242
[[Daniel Kahneman]] and other pioneers of modern psychology have revealed that our minds crave certainty and when they don’t find it, they impose it. In forecasting, hindsight bias is the cardinal sin. p245
The true distribution of wealth is a fat-tailed one that permits much more extreme outcomes. p246
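A sketch of the difference (distribution parameters invented): sample wealth from a thin-tailed bell curve and from a fat-tailed Pareto distribution and compare the extremes.

```python
import random

random.seed(1)

n = 100_000
# Thin-tailed: a bell curve centered at 50. Extreme fortunes are effectively
# impossible; nothing strays far beyond a few standard deviations.
normal_wealth = [max(0.0, random.gauss(50, 15)) for _ in range(n)]
# Fat-tailed: a Pareto distribution scaled to a similar typical value.
pareto_wealth = [50 * random.paretovariate(1.5) for _ in range(n)]

for name, sample in (("bell curve", normal_wealth), ("fat-tailed", pareto_wealth)):
    extreme = sum(w > 500 for w in sample) / n
    print(f"{name:10s}: max = {max(sample):>9,.0f}, share above 500 = {extreme:.2%}")
# The bell-curve maximum stays near 120; the fat-tailed sample routinely
# produces fortunes hundreds of times the typical value.
```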
What’s Next?
[[Vladimir Lenin]]’s shorthand for “Who does what to whom?”