Published on Convert on 22/01/2025.
“How can I understand stats well enough to trust my A/B test results?” This article is a visual guide to the basics—no stats degree required.
What you’ll see:
- Why we need stats: Results vary. We need to tell real signal from random noise (illustrated with an A/A test versus a real A/B test).
- Standard deviation: How spread out the numbers are—a measure of “noise.”
- Normal distribution: With enough data, conversion rates tend to follow a bell curve. We put the variant’s result on that curve in “standard deviation units” (z-score).
- Null hypothesis: Assume no difference between groups. We only reject that if the evidence is strong enough.
- Rejection regions: At a 95% confidence level, we reject the null if the result falls in the outer 5% of the curve (roughly beyond ±2 standard deviations for a two-tailed test). A one-tailed test only checks for improvement (or only for harm), so the whole 5% sits in one tail.
- p-value: The chance of seeing a result at least this extreme if there were really no difference. If p < 0.05 (or your chosen threshold), we reject the null (see the sketch right after this list).
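
To make the z-score and p-value steps concrete, here is a minimal Python sketch of the standard two-proportion z-test. The visitor and conversion counts are made-up numbers for illustration only; the article itself shows the equivalent formulas in Sheets/Excel rather than code.

```python
# A minimal sketch of the z-score / p-value arithmetic described above,
# using hypothetical visitor and conversion counts (not from the article).
from math import sqrt, erf

def normal_cdf(z: float) -> float:
    """Standard normal CDF, Phi(z), computed from the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Hypothetical results: control (A) and variant (B).
visitors_a, conversions_a = 10_000, 500   # 5.0% conversion rate
visitors_b, conversions_b = 10_000, 560   # 5.6% conversion rate

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Under the null hypothesis both groups share one conversion rate,
# so we pool them to estimate the "noise" (standard error of the difference).
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

# z-score: the observed lift expressed in standard deviation units.
z = (rate_b - rate_a) / standard_error

# One-tailed p-value: chance of a lift at least this big if there is no real difference.
p_one_tailed = 1 - normal_cdf(z)
# Two-tailed p-value: chance of a difference this extreme in either direction.
p_two_tailed = 2 * (1 - normal_cdf(abs(z)))

print(f"z = {z:.2f}")                       # ~1.89
print(f"one-tailed p = {p_one_tailed:.3f}") # ~0.029 -> reject at 95%
print(f"two-tailed p = {p_two_tailed:.3f}") # ~0.058 -> do not reject at 95%
```

With these made-up numbers the lift lands right around z ≈ 1.9, so the one-tailed test rejects the null at 95% while the two-tailed test does not, which is exactly the one-tailed versus two-tailed distinction from the list above.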
The article includes simple formulas (e.g. z-score, p-value in Sheets/Excel) and diagrams. Statistics don’t give “truth”—they help us manage uncertainty and decide when to act.
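
For reference, one common form of that z-score for comparing two conversion rates (the article's exact notation may differ) is

$$
z = \frac{\hat{p}_B - \hat{p}_A}{\sqrt{\hat{p}\,(1-\hat{p})\left(\tfrac{1}{n_A} + \tfrac{1}{n_B}\right)}},
\qquad
\hat{p} = \frac{x_A + x_B}{n_A + n_B},
$$

where x and n are the conversions and visitors in each group. The p-value then comes from looking up z on the standard normal curve; in Excel, for example, a one-tailed p-value can be computed as `=1-NORM.S.DIST(z, TRUE)`.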