
We like to think we’re rational, but when it comes to estimating probabilities, we’re all terrible. And not in a “haha, I guessed wrong” way, but in a systematic, predictable, and completely avoidable way. A study on probability estimation by non-experts breaks down why we get it wrong and what we can do to fix it.
The Problem: Overconfidence & Context-Blind Biases
Across multiple real-world scenarios—medical tests, breathalyzers, enemy detection—people consistently overestimated probabilities. In one mammogram scenario where the correct answer was a 10% chance of cancer, people guessed 43%. Some even thought it was 100% (which translates to, “Yep, you dead”).
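To see why the right answer can be so low, run the numbers yourself. Below is a minimal sketch of the Bayes calculation; the inputs (1% prevalence, 90% sensitivity, 8% false positive rate) are illustrative assumptions chosen to land near 10%, not figures from the study.

```python
# Bayes' rule for a positive test result.
# All inputs are illustrative assumptions, not figures from the study.
prevalence = 0.01      # base rate: 1% of patients have the condition
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.08  # P(positive | no condition)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_condition = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition:.1%}")  # ~10.2%
```

Most of the positives come from the huge pool of healthy people, so a positive result mostly reflects that pool, not near-certainty.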
Why does this happen? Turns out, there’s no single culprit. It’s not just base rate neglect (the classic mistake of ignoring how rare something is). It’s a mess of multiple biases interacting:
- Binary thinking: A positive test result? Must mean the thing is definitely true.
- Ignoring false alarms: People forget that a lot of tests spit out false positives.
- Context neglect: The same mistakes show up in different fields—law, medicine, finance, military—but we don’t realize it.
What This Means for Decision-Making
You might be thinking, “Fine, people are bad at stats. What’s new?” But this has real consequences in areas where probability estimation is everything: business, trading, investing, hiring, and risk management.
The base rate problem isn’t just about ignoring statistics—it’s about failing to integrate them properly with new information.
Take startups. 95% fail. If you see one that looks promising and assume it has an 80% chance of making it, you’re not just being optimistic—you’re skipping a step. The right approach is to start with the base rate (5% success) and then adjust based on real evidence. Does this startup have something that measurably separates it from the rest? Market position, traction, a proven team? If not, feeling like it’s different doesn’t change the odds.
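Here’s what that adjustment looks like in practice: a minimal sketch of the update, with hypothetical likelihoods standing in for real due diligence.

```python
# Start from the base rate, then update on evidence with Bayes' rule.
# The two likelihoods are hypothetical, chosen purely for illustration.
base_rate = 0.05               # ~5% of startups succeed
p_signal_given_success = 0.60  # P(strong traction | eventual success)
p_signal_given_failure = 0.15  # P(strong traction | eventual failure)

p_signal = (p_signal_given_success * base_rate
            + p_signal_given_failure * (1 - base_rate))
posterior = p_signal_given_success * base_rate / p_signal

print(f"P(success | strong traction) = {posterior:.1%}")  # ~17.4%
```

Even a genuinely strong signal moves you from 5% to roughly 17%. Better, but nowhere near the 80% gut feel.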
Same with stocks. If a pattern has only worked 20% of the time historically but this time “feels different,” ask yourself why. Unless you have something concrete—new information that actually shifts the probability—you’re just overweighting short-term signals and ignoring the bigger picture.
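If you like odds, the same update is a single multiplication: posterior odds equal prior odds times the likelihood ratio. The likelihood ratio below is a made-up stand-in for whatever concrete edge you think you have.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The likelihood ratio is hypothetical; it stands in for concrete new information.
p_works = 0.20                        # pattern's historical hit rate
prior_odds = p_works / (1 - p_works)  # 0.25

likelihood_ratio = 3.0                # evidence 3x more likely if the trade works
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"P(works | evidence) = {posterior:.1%}")  # ~42.9%
```

If you can’t honestly put the likelihood ratio above 1, “feels different” leaves you exactly at the 20% base rate.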
Hiring works the same way. If a test has a 40% false positive rate, then a candidate who scores high isn’t automatically exceptional. The test alone isn’t enough. You need more—interviews, track record, real-world performance—before making a call.
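A quick sanity check on that test. Only the 40% false positive rate comes from the scenario above; the other two inputs are hypothetical assumptions.

```python
# How much does a high score actually tell you?
# Only the 40% false positive rate is from the text; the rest are assumptions.
p_exceptional = 0.10   # assumed share of truly exceptional candidates
sensitivity = 0.90     # assumed P(high score | exceptional)
false_positive = 0.40  # P(high score | not exceptional)

p_high = sensitivity * p_exceptional + false_positive * (1 - p_exceptional)
posterior = sensitivity * p_exceptional / p_high

print(f"P(exceptional | high score) = {posterior:.0%}")  # 20%
```

Under these assumptions a high score takes a candidate from 10% to 20%. A real update, but not remotely a hire-on-the-spot signal.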
Most mistakes happen when people either ignore the base rate completely or let new information override it without a real reason. Getting it right isn’t about picking one over the other—it’s about knowing how to balance both.
How to Get Better at Probabilistic Thinking
- Think in conditional probabilities: Instead of “What’s the chance X is true?” ask, “Given the evidence, how much does that change the likelihood?”
- Use base rates: Always anchor to historical data before letting a new signal sway you. But remember that historical extremes can still be broken; base rates are a starting point, not a hard ceiling.
- Force yourself to consider false alarms: Before trusting a test result or a business forecast, ask, “How often does this kind of signal lead to the wrong call?”
- Simulate worst-case scenarios: Assume your probability estimate is wrong. What happens then?
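To make that last point concrete, here’s a minimal stress test. Everything in it is hypothetical: a bet that pays 2-to-1 on a win, loses the stake on a loss, priced off your own 40% estimate.

```python
# Stress-test a decision against errors in your own probability estimate.
# All numbers are hypothetical: a 2-to-1 payoff priced off a 40% estimate.
payoff_win, payoff_loss = 2.0, -1.0
your_estimate = 0.40  # your probability that the bet pays off

for error in (-0.15, -0.10, -0.05, 0.0):
    p_true = your_estimate + error
    ev = p_true * payoff_win + (1 - p_true) * payoff_loss
    print(f"true p = {p_true:.0%}: expected value = {ev:+.2f} per unit risked")
```

A ten-point overestimate flips this bet from positive to negative expected value. If your decision is that fragile, size it down or go find better evidence.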
We’re wired to misjudge probabilities, but we don’t have to stay that way. By applying a probabilistic mindset, businesses, traders, and decision-makers can avoid costly mistakes and make smarter bets. The world isn’t black-and-white. Neither should your thinking be.