Activation rate
The percentage of new signups who reach a defined "first value" milestone within a target window. The single number that separates "got the user to sign up" from "got the user to actually use it."
Activation rate measures how many users actually experience the product's value, not just how many created an account. The activation moment is product-specific — for Slack, "team sent 2,000 messages." For Dropbox, "user uploaded one file from a desktop client." For Twitter, the classic answer: "user follows 30 accounts." The choice of activation event determines what your funnel is actually optimizing toward.
Activation matters more than signup conversion at most stages. A page that doubles signups but halves activation has just made the funnel worse — twice as many signups, half as many users who reach value, the absolute number of activated users is roughly unchanged but operating costs doubled. Pre-commit activation as a guardrail on every signup-flow experiment, or you will ship a long string of "wins" that do not actually grow the active user base.
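The guardrail arithmetic above can be sketched directly; the numbers here are hypothetical, chosen to mirror the "doubled signups, halved activation" scenario:

```python
# Why activation must guardrail signup tests: doubling signups while
# halving activation leaves activated users flat but doubles cost.
def activated_users(signups: int, activation_rate: float) -> int:
    return round(signups * activation_rate)

before = activated_users(1_000, 0.30)  # 1,000 signups at 30% activation
after = activated_users(2_000, 0.15)   # signups doubled, activation halved

print(before, after)  # 300 300 — no growth in the active user base
```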
When to use it
Define activation before you run any signup-flow experiments — without an activation event, the verdict is just signup theater. Use activation as a guardrail on every top-of-funnel test, and as the primary metric on every onboarding test.
What this looks like in practice
A useful activation event has three properties. It correlates strongly with long-term retention (so the metric is meaningful, not just pretty). It can be measured within days of signup (so the verdict on an experiment arrives within weeks, not quarters). And it requires effort the user would not exert if they did not see value (so it cannot be gamed by clever defaults). Picking the activation event is more important than the rate it currently sits at.
Activation rate moves in two ways: signup quality up, or onboarding effectiveness up. A test that targets cleaner audiences (better-fit ads, better landing-page filtering) lifts activation by raising the average new user's intent. A test that simplifies onboarding lifts activation by getting more users to value with less friction. The two levers compound — the same product can grow activation from 20% to 40% by tightening both ends without changing any feature. Run experiments on both.
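The compounding of the two levers is multiplicative. A minimal sketch with illustrative lift figures (the 20% baseline comes from the paragraph above; the individual lifts are assumptions chosen to land near 40%):

```python
# Two multiplicative levers: signup quality and onboarding effectiveness.
baseline = 0.20
quality_lift = 1.40     # cleaner audience: +40% relative (assumed)
onboarding_lift = 1.43  # simpler onboarding: +43% relative (assumed)

combined = baseline * quality_lift * onboarding_lift
print(f"{combined:.2f}")  # 0.40 — activation doubles with no feature work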
Activation also decays silently. The activation event you picked when the product was new might have stopped predicting retention as the product grew or the audience shifted. Re-validate the choice annually by comparing activated users in cohort N to retention 90 to 180 days later. If activation no longer separates retainers from churners, redefine it; running optimization against a stale activation event is more expensive than running no activation experiments at all.
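The annual re-validation described above amounts to splitting a cohort by activation status and comparing downstream retention. A minimal sketch over hypothetical cohort records (field names and figures are assumptions, not a real schema):

```python
# Re-validate the activation event: does it still separate
# retainers from churners 90 days out? (Hypothetical records.)
cohort = [
    {"activated": True,  "retained_90d": True},
    {"activated": True,  "retained_90d": True},
    {"activated": True,  "retained_90d": False},
    {"activated": False, "retained_90d": True},
    {"activated": False, "retained_90d": False},
    {"activated": False, "retained_90d": False},
]

def retention(rows):
    return sum(r["retained_90d"] for r in rows) / len(rows)

act = retention([r for r in cohort if r["activated"]])
non = retention([r for r in cohort if not r["activated"]])
print(round(act, 2), round(non, 2))
```

If the two rates converge, the event has gone stale and should be redefined before any further optimization against it.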
A worked example
A B2B SaaS defines activation as "added a teammate AND completed one workflow within 7 days." Baseline: 22% of signups activate. A redesigned onboarding tested in an experiment lifts activation to 28% — a +27% relative improvement on the metric that actually predicts paid retention. The signup conversion barely moved; the verdict is ship anyway because activation is the leading indicator that matters.
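The relative-lift figure in the worked example is straightforward to reproduce:

```python
# Relative lift from the worked example: 22% -> 28% activation.
baseline, variant = 0.22, 0.28
relative_lift = (variant - baseline) / baseline
print(f"{relative_lift:+.0%}")  # +27%
```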
Common mistakes
- Picking an event that is too easy. If 95% of signups hit the activation event without help, the metric is decorative — it cannot move and tells you nothing. Pick an event that 30 to 50% of users hit naturally; that is the band where experiments matter.
- Confusing activation with engagement. Activation is the first-time-value moment. Engagement is repeated use. They need separate metrics; conflating them blurs both verdicts.
- Optimizing activation at the cost of intent. A test that auto-completes the activation event with defaults will lift the metric and tank retention. The point is the user choosing to use the product, not a graph that goes up.
Related terms
Pick a hypothesis. Vocabulary done.
The fastest way to learn this vocabulary is to commit one experiment. The contract takes about five minutes to write.