"Superforecasting" explores the fascinating world of prediction and decision-making, revealing the secrets of those who excel at forecasting future events. The book challenges conventional wisdom about expert opinions and introduces readers to the "superforecasters": ordinary people who consistently outperform even professional intelligence analysts in predicting global events. It offers practical strategies and cognitive tools to improve one's own forecasting abilities, from breaking down complex problems to embracing uncertainty and constantly updating beliefs based on new information. Ultimately, the book provides a thought-provoking and actionable framework for making better judgments in an increasingly unpredictable world.

"Superforecasting: The Art and Science of Prediction" by Philip E. Tetlock and Dan Gardner is an excellent book about the science of forecasting. It explains how to evaluate predictions and how to identify and nurture individuals who are particularly good at it, known as superforecasters. Since predictions form the basis for many decisions in public and private life, the relevance of this book cannot be overstated: as a society, we urgently need to work on getting better in this area.
The longer the prediction horizon, the more difficult and inaccurate predictions become. One reason lies in chaos theory: the world is incredibly complex, and small changes can have large effects, as illustrated by the classic "butterfly effect," in which the flap of a butterfly's wings alters the initial state just enough that a tornado later forms somewhere it otherwise would not have. The book also points out that with a large number of predictions, some will come true purely by chance. Those who succeed this way are unlikely to repeat their success consistently, and yet a single bold prediction that happens to land can unjustly elevate someone to expert status, despite a preceding history of inaccurate forecasts.
In a time flooded with predictions and forecasts, this book impressively shows that most predictions are fundamentally flawed. This affects almost all experts who make regular forecasts, from politicians and economists to consultants and scientists. The big problem is that their predictions are almost never systematically analyzed and evaluated. Meteorologists are one of the few exceptions: their weather forecasts are continuously and systematically analyzed and improved.
Other scientific fields faced similar problems in the past. For example, randomized controlled trials, now standard in medicine, only became popular after World War II. In these trials, test subjects are divided into two groups, one receiving the drug and the other a placebo, to identify whether the drug really works. Until then, the effectiveness of a treatment was assessed primarily through anecdotes, case reports, and expert opinions, which often led to ineffective drugs or long-unrecognized side effects.
The authors describe the main problems with predictions, such as ideologically entrenched experts who try to press every problem into their preferred scheme and regularly succumb to confirmation bias. The book also highlights the often-overlooked influence of hidden intentions behind predictions, such as when they are used to support political views or for commercial interests. Moreover, media prefer bold predictions with an absolute claim to truth over nuanced and probability-based assessments. And last but not least, most predictions use unspecific language, which allows them to be easily reinterpreted afterward to fit the outcome.
Predictions can only be measured if they specify exactly what will happen, have a clear timeframe, and work with quantifiable probabilities, not vague terms like "likely" or "possibly."
These predictions can then be measured using methods like the Brier score, which captures two things. One is calibration, i.e., the alignment of predictions with actual outcomes: if events are predicted to occur with 70% probability, they should actually occur about 70% of the time over many similar predictions. The other is resolution, i.e., how decisive the predictions are: forecasts that commit to probabilities far from 50%, such as 90%, score better when correct than cautious forecasts hovering just under or over 50%. A more complex variant of the Brier score can also measure the accuracy of predictions that are regularly adjusted to new information, which would be the ideal case: the world changes, and predictions should change with it.
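The core idea can be sketched in a few lines of Python. The function below uses the two-category form of the Brier score that Tetlock works with, where 0 is a perfect score, always answering 50% yields 0.5, and 2 is the worst possible result; the sample forecasts are invented for illustration.

```python
def brier_score(forecasts, outcomes):
    """Two-category Brier score: each forecast p (probability the event
    happens) contributes (p - o)^2 + ((1 - p) - (1 - o))^2, where o is 1
    if the event occurred and 0 if not. Lower is better."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 + ((1 - p) - (1 - o)) ** 2
               for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident and correct forecaster scores near 0 ...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))
# ... while permanent hedging at 50% always scores exactly 0.5.
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))
```

Averaging over many predictions is what makes the score meaningful: a single lucky hit says little, but a low score sustained across hundreds of questions is hard to fake.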
The very sobering realization from the book is that most experts' predictions are barely better than random chance – comparable to a chimpanzee randomly selecting a prediction. This underscores the importance of refining our prediction methods, as most political and economic decisions are based on these expectations about the future.
After this introduction to the science of predictions, the book focuses on how good predictions are made. The authors base their analyses on individuals who consistently make better predictions than the rest of the population. These superforecasters are characterized by a mix of analytical ability, cognitive flexibility, humility, self-confidence, and an unwavering commitment to precision and clarity. They have a growth mindset and constantly refine their skills by being open to feedback and identifying errors in their own judgment. The encouraging message from the authors is that these are all learnable skills; one is not born a superforecaster. I will definitely try to improve in this area personally, as the skill can be applied in most areas of life.
Primarily, strong analytical skills are crucial. This includes the ability to break a problem down into manageable components and recombine the partial answers into an overall estimate (known as Fermi estimation, after Nobel laureate Enrico Fermi). This can be further improved by consistently including different perspectives. Another skill is the use of base rates, i.e., the general frequency of the class of events the problem belongs to, to anchor predictions on an evidence-based starting point. For example, when wondering whether a specific family has a pet, one should start from the percentage of all households that own one. This estimate is then refined by incorporating additional specific facts about the case. It is important to distinguish essential from irrelevant information to avoid overcorrecting.
Just as important is recognizing the inherent uncertainty in predictions and maintaining cognitive flexibility. This means questioning one's own beliefs and assumptions, anticipating potential pitfalls in a solution before committing to it (known as a "pre-mortem analysis"), and staying aware of the possibility of extremely unlikely but highly influential events, called "black swans" after Nassim Taleb's famous book.
Moreover, every superforecaster must seriously consider opposing viewpoints and be vigilant against personal and cognitive biases, always striving to identify and neutralize them. Feedback and evaluations after a prediction help improve future predictions. Finally, managing emotional and identity-related attachments is crucial. Superforecasters must untangle opinions linked to their own identity and monitor emotional investments to ensure objective predictions.
Forecasts benefit significantly when multiple people tackle the problem. Merely averaging several people's predictions can noticeably improve quality. Even better are teams that share diverse perspectives, information, and opinions. However, this advantage disappears when groupthink prevails, i.e., when the group ignores dissenting minority opinions and instead reinforces each other's preconceived views. This can be countered by actively encouraging criticism and dissent within the group. An interesting example is the Prussian army, which, despite its reputation for strict discipline, had a deeply rooted culture of subordinates questioning orders during decision-making. Strict obedience was only enforced once those decisions were being executed.
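A minimal sketch of such aggregation, with invented numbers: a simple average already helps, and the book describes how the Good Judgment Project went further by "extremizing" the aggregate, pushing it away from 50% on the grounds that individually cautious forecasts share the same hedging bias. The power `a` below is a tuning assumption, not a value from the book:

```python
forecasts = [0.65, 0.80, 0.55, 0.70, 0.60]   # five individual estimates

# Step 1: the plain average of the group's probabilities.
mean = sum(forecasts) / len(forecasts)
print(round(mean, 2))  # → 0.66

# Step 2: an extremizing transform, raising the odds to a power a > 1
# so the aggregate commits more firmly than any cautious individual.
a = 2.0
odds = (mean / (1 - mean)) ** a
extremized = odds / (1 + odds)
print(round(extremized, 2))  # → 0.79
```

The transform only sharpens a forecast that is already on the right side of 50%; applied to badly calibrated inputs it would amplify their error, which is why it works best on top of a diverse, independent group.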
I highly recommend "Superforecasting", as it presents, in very accessible terms, the widely ignored problem of poor, unreliable predictions serving as the basis for most decisions in politics and business, and proposes actionable solutions. The authors build their argument very carefully, which may seem a bit lengthy to some readers. They repeatedly refer to the Good Judgment Project, where all the principles mentioned in the book are applied. It is also positive that they always mention alternative approaches, such as prediction markets, where people can bet on events and thus have an incentive to make the best predictions by aggregating many different opinions. According to the authors, however, these and other known alternatives perform measurably worse than identifying and developing superforecasters.
For those interested in delving deeper into theoretical foundations, the books "Thinking, Fast and Slow" by Daniel Kahneman and "The Black Swan" by Nassim Taleb are recommended.
