How to determine what metrics you need for an A/B test

Published in UX Collective (uxdesign.cc).

A “failed” experiment is one where we don’t learn anything actionable: “it didn’t work” isn’t enough to decide the next step. To learn, we need the right data in place before the test runs. The article describes a simple, team-friendly process that designers and product owners can join in on:

  1. List possible outcomes — e.g. conversion win, conversion loss, no impact, or something more nuanced such as conversion up but average order value (AOV) down.
  2. Brainstorm reasons — For each outcome, fill in: “I think this could happen because …”. Include potential negative and behavioural side effects, and draw on existing research or user feedback where you have it.
  3. Choose metrics — For each reason/behaviour, pick metrics that can measure it. Capture them in a table with columns for metric name, availability in the self-serve dashboard, availability to analysts, and priority (see the sketch after this list). Aim for as much self-serve coverage as possible and balance thoroughness against build cost.
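
As an illustration of step 3, here is a minimal sketch of that metrics table as a structured checklist; the schema, metric names, and reasons are hypothetical, not from the article:

```python
from dataclasses import dataclass

@dataclass
class MetricPlan:
    """One row of the step-3 metrics table (hypothetical schema)."""
    name: str
    reason: str           # the "I think this could happen because ..." it measures
    self_serve: bool      # available in the self-serve dashboard?
    analyst_access: bool  # available for analysts to query?
    priority: int         # 1 = must-have before launch

plan = [
    MetricPlan("conversion rate", "new CTA drives more checkouts", True, True, 1),
    MetricPlan("average order value", "cheaper items may attract the clicks", False, True, 1),
    MetricPlan("support contacts", "new copy may confuse users", False, False, 2),
]

# Flag rows we cannot yet measure -- never assume the right metrics exist.
for m in sorted(plan, key=lambda m: m.priority):
    if not (m.self_serve or m.analyst_access):
        print(f"BUILD NEEDED (priority {m.priority}): {m.name}: {m.reason}")
```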

The idea: predict outcomes, then work backwards to the metrics we need. Analysts should be involved where possible; metrics must be comparable across test and control. The article includes example tables and stresses that we should never assume the right metrics already exist.
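
Since every metric must be comparable across test and control, each should be computed the same way for both groups over the same window. A minimal pandas sketch, assuming a hypothetical per-user results table (the article prescribes no schema):

```python
import pandas as pd

# Hypothetical per-user results for one experiment window.
events = pd.DataFrame({
    "user_id":     [1, 2, 3, 4, 5, 6],
    "variant":     ["control", "test", "control", "test", "test", "control"],
    "converted":   [0, 1, 0, 1, 0, 1],
    "order_value": [0.0, 42.0, 0.0, 18.0, 0.0, 60.0],
})

# Identical calculations per variant keep the comparison apples to apples.
summary = events.groupby("variant").agg(
    users=("user_id", "nunique"),
    conversion_rate=("converted", "mean"),
)
# AOV is averaged over converters only, again identically for both groups.
summary["avg_order_value"] = (
    events[events["converted"] == 1].groupby("variant")["order_value"].mean()
)
print(summary)
```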

[Image: example “outcomes and reasons” table from the article]

Read the full article on UX Collective →

Iqbal Ali

Fractional AI Advisor and Experimentation Lead. Available for training, development, workshops, or as a fractional team member.