Minimum detectable effect (MDE)
The smallest change in a metric that an experiment can reliably detect, given your sample size, baseline rate, and statistical confidence. A constraint to set before launch — not a result that drops out after.
MDE answers a precise question: "If I run this experiment, how big does the effect have to be for me to actually see it in the data?" It depends on three inputs you control before launch — the baseline conversion rate, the sample size per variant, and the statistical power and significance you require. Plug those into a sample-size calculator and the MDE drops out.
For founder-scale traffic, MDE is usually the punchline of a stats conversation. With a 5% baseline conversion rate and 1,000 visitors per variant, the smallest lift you can reliably detect is around 50% relative — anything smaller will look like noise. That is why most founder A/B tests never reach significance. MDE makes the impossibility legible up front, before you waste a quarter on a test that mathematically cannot conclude.
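The arithmetic behind that ballpark can be sketched directly. This is a minimal pooled-variance approximation of the two-proportion z-test — an illustration, not a substitute for a proper power calculator, and `mde_relative` is a name chosen here for the sketch:

```python
from math import sqrt
from statistics import NormalDist

def mde_relative(baseline, n_per_variant, power=0.80, alpha=0.05):
    """Smallest relative lift detectable at the given power and
    significance, using a pooled-variance normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    # Absolute detectable difference between the two conversion rates
    delta = (z_alpha + z_beta) * sqrt(2 * baseline * (1 - baseline) / n_per_variant)
    return delta / baseline  # express as a relative lift

# 5% baseline, 1,000 visitors per variant
print(f"{mde_relative(0.05, 1000):.0%}")  # ≈ 55% relative
```

Exact calculators differ slightly (one- vs two-sided, unpooled variance), but the order of magnitude is the point: at this traffic level, only very large lifts are detectable.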
When to use it
Calculate MDE before launching any A/B or split test, especially at founder traffic levels. If the MDE comes out bigger than any lift the change could plausibly deliver, redesign the experiment before launch — the verdict is already pre-decided.
What this looks like in practice
MDE is a function of four numbers: baseline conversion rate, sample size per variant, statistical power (conventionally 80%), and significance level (conventionally 5%, i.e. 95% confidence). Move any of those and MDE moves. Higher baseline → smaller relative MDE. More traffic → smaller MDE. Lower power requirement → smaller MDE. Founders almost always have the wrong combo — low traffic, low baseline, high confidence — which produces an MDE so large the test cannot conclude.
The mistake most founders make is treating MDE as a number that comes out of the test, not a number that goes into the design. By the time you have run a test and gotten "inconclusive" results, MDE has already silently failed you. Pre-compute it. If a 5% relative lift would change your business but your MDE is 50%, the test was a coin flip from day one — design something else.
There are three ways to make MDE smaller without buying more traffic. First, increase the baseline rate: test higher-funnel changes where rates are 10% or more, not 0.5% sale-page conversion. Second, accept lower confidence: 90% or even 85% is fine for low-risk, reversible changes. Third, run multivariate tests with shared traffic. Each tradeoff is a real engineering choice you make in the contract before the test launches.
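The first two levers can be quantified with the same pooled-variance approximation used by most sample-size calculators (a sketch with illustrative numbers — your calculator's exact figures will differ slightly):

```python
from math import sqrt
from statistics import NormalDist

def mde_relative(baseline, n, power=0.80, confidence=0.95):
    """Relative MDE under a pooled-variance normal approximation."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2) + NormalDist().inv_cdf(power)
    return z * sqrt(2 * baseline * (1 - baseline) / n) / baseline

n = 1000
# Lever 1: a higher-funnel baseline shrinks the relative MDE dramatically
print(f"0.5% baseline: {mde_relative(0.005, n):.0%}")  # well over 100% relative
print(f"10%  baseline: {mde_relative(0.10, n):.0%}")   # much smaller
# Lever 2: relaxing confidence from 95% to 85% shrinks it further
print(f"10% baseline, 85% conf: {mde_relative(0.10, n, confidence=0.85):.0%}")
```

Same 1,000 visitors per variant in every row — the detectable effect moves because the inputs you control moved, not because you bought traffic.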
A worked example
At a 4% baseline conversion rate, 1,000 visitors per variant, 80% power, and 95% confidence, the MDE is roughly 50 to 60% relative, depending on the exact formula — meaning the new variant has to convert at around 6% or better for the test to call it a real win. If the change you are testing might plausibly deliver 10%, the test is theatre.
Common mistakes
- Picking MDE after the test. MDE is a constraint, not a result. Set it in the contract before the test runs, or the verdict is post-hoc and the rigor was never there.
- Assuming MDE in the tool applies to your traffic. A 5% MDE looks fine until you check it requires 50,000 visitors per variant. Plug in your actual baseline and traffic numbers, not the defaults.
- Running anyway when MDE is bigger than any plausible lift. If the smallest detectable effect is 40% and your change might deliver 10%, the test is theatre. Pick a different change, a different surface, or a different design.
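To check whether a tool's advertised MDE is reachable with your traffic, invert the same approximation to get the visitors required per variant (a sketch under the pooled-variance assumption; real calculators vary slightly):

```python
from math import sqrt, ceil
from statistics import NormalDist

def n_per_variant(baseline, relative_mde, power=0.80, alpha=0.05):
    """Visitors needed per variant to detect a given relative lift,
    under a pooled-variance normal approximation (two-sided test)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    delta = baseline * relative_mde  # absolute difference to detect
    return ceil((z / delta) ** 2 * 2 * baseline * (1 - baseline))

# A "5% MDE" at a 10% baseline needs on the order of 50,000+ visitors per arm
print(n_per_variant(0.10, 0.05))
```

If that number is months of your traffic, the tool's default MDE does not apply to you — recompute with your own baseline before committing to the test.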
Pick a hypothesis. Vocabulary done.
The fastest way to learn this vocabulary is to commit one experiment. The contract takes about five minutes to write.