This is another article about faulty analysis, and the dangers of making decisions based on incomplete data. Thankfully, the decisions most of us make don’t carry the same stakes.
Let me start with a puzzle for you…
When the British army began issuing helmets to frontline soldiers in the First World War, the initial reports alarmed and puzzled many senior officers: casualties with head wounds had in fact increased.
Why was this?
Think about this for a minute before you read on.
Helmets must make soldiers more reckless
This was what many officers believed.
- Soldiers now thought they could stick their heads above the parapet without fear.
- They were likely also no longer taking suitable precautions against incoming artillery.
- Helmets gave soldiers a false sense of security.
Helmets must go
This seems like an obvious conclusion. The fix would naturally be to advocate for immediate withdrawal of the helmets, right?
Case closed! Or is it…
With hindsight, the real explanation should be obvious. But without the complete casualty data for context, the conclusion above remains compelling.
Fatalities had also decreased
Many injuries that would have proven fatal before were now survivable head wounds.
Metrics without context can be misleading
By looking at a single metric without context, the officers nearly made a serious mistake. By trying to tell a story with a single data point, they almost reached a compelling but false conclusion.
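The helmet story can be sketched as a toy calculation. All the numbers below are hypothetical, purely for illustration; they simply show how the same underlying data supports opposite conclusions depending on which metric you read:

```python
# Hypothetical outcomes for soldiers hit in the head, before and after
# helmets were issued. Without helmets, most such hits are fatal and so
# never appear in the "wounded" statistics. With helmets, many of those
# same hits become survivable head wounds instead.
before = {"fatal": 600, "head_wound": 100, "unharmed": 300}  # no helmets
after = {"fatal": 200, "head_wound": 450, "unharmed": 350}   # with helmets

# Metric 1: recorded head wounds. In isolation, helmets look harmful.
print("Head wounds rose:", after["head_wound"] > before["head_wound"])

# Metric 2: fatalities. This is the context that reverses the conclusion.
print("Fatalities fell:", after["fatal"] < before["fatal"])
```

Both statements print `True`: head wounds went up precisely because fatalities went down, since soldiers who would previously have died now survived as wounded.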
If you start with a biased hypothesis, it’s easy to confirm it
Existing fears and prejudices make it easy to fall into the trap of confirmation bias: to home in on evidence that fits your existing view, and to disregard anything that doesn’t.
So how might we avoid this?
If you are crunching the numbers, it’s usually because you have a hunch: a hypothesis you want to prove. So start by trying to do just that. Then make sure you (or someone else) also looks for the numbers that say the opposite.