Illusion of Validity

Type: Judgment / Confidence
Also Known As: Confidence-accuracy gap, overprecision


Definition

Believing that our predictions, judgments, and forecasts are more accurate than they actually are. We feel confident because the evidence seems compelling, even when that confidence is unwarranted. The gap between what we believe and reality is where costly mistakes happen.

"I'm 90% confident in this forecast." (Actual accuracy: 60%)


Form

  1. Evidence is gathered and analyzed
  2. A judgment or prediction is formed
  3. High confidence is assigned based on the evidence
  4. The confidence exceeds actual predictive accuracy
  5. The gap between confidence and reality goes unnoticed

Examples

Example 1: Expert Forecasting

Analysts predict stock prices with high confidence based on company fundamentals and market trends. When tracked, their predictions are barely better than chance, yet the confidence remains high.

Problem: The feeling of understanding exceeds actual predictive power.

Example 2: Medical Diagnosis

A doctor is highly confident in a diagnosis based on symptoms that seem to fit perfectly. The actual diagnosis is something completely different that was missed.

Problem: Pattern-matching creates false certainty.

Example 3: Hiring Decisions

Interviewers feel confident they can assess candidate quality from a 30-minute interview. Research shows unstructured interviews have low predictive validity, but the illusion persists.

Problem: Social interaction feels diagnostic but isn't.

Example 4: Coin Flip Calibration

When people predict coin flips and rate their confidence, well-calibrated people should say 50% confidence. Most say 60-70%, demonstrating the illusion: they feel they can predict randomness.

Problem: The feeling of knowledge exceeds actual knowledge.
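The coin-flip example can be simulated directly. A minimal sketch (the 65% stated confidence and the guessing rule are illustrative assumptions, not from a specific study): no matter what confidence the predictor reports, the hit rate on a fair coin converges to 50%.

```python
import random

def coin_flip_calibration(n_flips=10_000, stated_confidence=0.65, seed=0):
    """Simulate a predictor who guesses fair coin flips while reporting
    a fixed confidence level. Returns (stated_confidence, actual_accuracy)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_flips):
        prediction = rng.random() < 0.5  # the predictor's guess
        outcome = rng.random() < 0.5     # the actual fair flip
        hits += prediction == outcome
    return stated_confidence, hits / n_flips

stated, actual = coin_flip_calibration()
# stated is 0.65; actual hovers around 0.50 -- the gap is the illusion
```

The gap between `stated` and `actual` is exactly the confidence-accuracy gap this card describes.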


Why It Happens

  • Coherent stories feel true
  • Confirmation bias supports initial assessments
  • Similarity-based prediction feels reliable
  • The evidence we see feels like all relevant evidence
  • We don’t get feedback on wrong predictions often enough

How to Counter

  1. Calibration training: Practice with feedback on confidence levels
  2. Consider the opposite: What would prove this prediction wrong?
  3. Reference class forecasting: How did similar predictions turn out?
  4. Wider intervals: Deliberately expand confidence ranges
  5. Track predictions: Keep a record and score yourself
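Tracking predictions (counter #5) only helps if you score them. One standard scoring rule is the Brier score: the mean squared gap between stated probability and what actually happened. A sketch, with a hypothetical prediction log:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and binary outcome.
    0.0 is perfect; always saying 50% scores 0.25; confident wrong
    calls are punished heavily."""
    forecasts = list(forecasts)
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A forecaster who says "90% confident" but is right only half the time
# scores about 0.41 -- worse than admitting ignorance at 50% (0.25):
log = [(0.9, 1), (0.9, 0), (0.9, 1), (0.9, 0)]
score = brier_score(log)
```

Scoring a running log like this is what makes the feedback in counter #1 (calibration training) possible.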

The Confidence-Accuracy Gap

Research consistently shows:

  • People are overconfident in their judgments
  • The gap is larger for difficult tasks
  • Experts are often as overconfident as novices
  • The gap persists even with feedback

Well-calibrated people are rare and valuable.
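Calibration in this sense is measurable: bucket your forecasts by stated confidence and compare each bucket's hit rate to its label. A minimal sketch with hypothetical data:

```python
from collections import defaultdict

def calibration_report(forecasts):
    """Group (stated_probability, outcome) pairs by confidence level and
    return {confidence: observed_hit_rate}. Well calibrated means the
    hit rate matches the label in every bucket."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(p, 2)].append(outcome)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# Ten "90% confident" calls, six correct -- a 30-point gap:
report = calibration_report([(0.9, 1)] * 6 + [(0.9, 0)] * 4)
# report == {0.9: 0.6}
```

A well-calibrated forecaster's report reads like the identity function: the 70% bucket hits about 70% of the time, the 90% bucket about 90%.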



References

  • Kahneman, D. & Tversky, A. (1973). On the psychology of prediction
  • Einhorn, H.J. & Hogarth, R.M. (1978). Confidence in judgment
  • Lichtenstein, S. et al. (1982). Calibration of probabilities

Part of the Convergence Protocol. Clear thinking for complex times.