DM-010 · // GROWTH · 28 min read

Marketing for product managers: growth loops, NSM, and the experiment portfolio

Product managers are usually under-trained on marketing and over-confident about growth. Here is the marketing layer that turns a PM into a growth PM: frameworks, role boundaries, and the artifacts hiring managers look for.

The PM career path requires fluency in marketing as a discipline, not just as a tactic library. PMs who understand growth marketing in the formal sense covered here can lead growth squads, negotiate with marketing peers, and design product roadmaps that respect the unit economics of acquisition. Most PMs cannot. This guide is the operating vocabulary that separates a feature PM from a growth PM.

Eight sections, in the order growth-oriented orgs actually work: the PM-marketing interface, growth loops, the North Star Metric, activation, the experiment portfolio, channel-product fit, the three GTM motions, and the Growth PRD, the artifact that ties everything together and the single most-asked-for sample in growth-PM hiring loops.

// 01 · The PM-marketing interface

Product managers and marketers report to different leaders, optimize different metrics, and often work from different definitions of the same words. Yet they own adjacent halves of the same customer journey. Without explicit collaboration structure, the interface between them becomes the source of more wasted effort than any other org boundary.

The three failure modes

  • The messaging-reality gap: Marketing builds positioning that promises capabilities the product does not yet have. The PM is the backstop on truthfulness, but is consulted too late. Fix: PMM and PM should review each other's drafts at the messaging stage, not the launch stage.
  • Roadmap pull: Marketing wants features that fit the next campaign. PM wants features that serve users long-term. Both legitimate. Without an agreed framework, the loudest voice wins.
  • Attribution disputes: Marketing claims first-touch (we generated the lead). Sales claims last-touch (we closed). Product claims activation (we delivered the value). All three are right, and that's the problem.

The Growth PM role exists specifically because this interface is too important to leave unstaffed. They hold the funnel end-to-end, partner with marketing as deeply as with engineering, and run experiments that touch both product and marketing surfaces.

// FIGURE 01 · INTERACTIVE
Who owns what across the product-to-marketing surface

Four roles, six activities, twenty-four collaboration cells. Click any cell to see how the actual handoff works.

Role                   Product   Pricing &   Onboarding   Positioning   Channel   Experiments
                       roadmap   packaging   flow         & copy        mix
Product Manager           O         C           C             K            K          C
Growth PM                 C         C           O             C            C          O
Product Marketer          K         O           C             O            C          K
Performance Marketer      –         K           K             C            O          C
O = Owner · C = Contributor · K = Consulted · – = Not involved
Example cell: Growth PM × Experiments (Owner)
Growth PMs are the most experiment-fluent role on the team. Running 4 to 10 concurrent experiments is normal, with structured hypothesis docs and pre-registered analysis plans.

// 02 · Growth loops, not just funnels

The funnel metaphor is taught first, used most, and least true. It treats acquisition as a fixed input. The best businesses do not pour new buyers in at the top. They generate new buyers from existing ones. That is a loop, not a funnel.

Outputₙ₊₁ = Outputₙ × k

Where k is the loop coefficient. With k = 1.10 over 24 cycles, growth is 9.85x. With k = 0.95 over the same span, the system shrinks to 29%. The asymmetry is brutal: small percentage gains in k produce exponential gains in output.
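The asymmetry is easy to verify directly. A minimal sketch of the loop formula, using the same k values as above:

```python
# Sketch: how small changes in the loop coefficient k compound over cycles.
def loop_output(k: float, cycles: int, start: float = 1.0) -> float:
    """Output after n cycles of a growth loop: output(n+1) = output(n) * k."""
    return start * k ** cycles

# k = 1.10 compounds to ~9.85x over 24 cycles; k = 0.95 decays to ~0.29x.
print(round(loop_output(1.10, 24), 2))  # 9.85
print(round(loop_output(0.95, 24), 2))  # 0.29
```

The gap between 1.10 and 0.95 is five percentage points of k, but a 34x difference in output after 24 cycles.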

The four canonical loops

  • Viral / referral: Each user invites N more. Calendly link sharing, Loom video links, Notion public pages.
  • Content / SEO: Each piece of content attracts users who become input for future content (case studies, data, examples).
  • Paid: Revenue from acquired users funds more ad spend. Bounded by LTV/CAC ratio.
  • Sales-led: Each closed customer generates referrals and case studies. The slowest loop, but the highest ACV.
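The "bounded by LTV/CAC" constraint on the paid loop can be made concrete. A sketch with assumed numbers (LTV, CAC, and the fraction of LTV recovered per reinvestment cycle are all illustrative):

```python
# Sketch (assumed numbers): a paid loop reinvests revenue into ad spend.
# It compounds only if each cohort returns more than it cost within the
# reinvestment window, i.e. k = (revenue recovered per cycle) / CAC > 1.
def paid_loop_k(ltv: float, cac: float, payback_fraction: float) -> float:
    """Loop coefficient for a paid loop: LTV recovered per cycle over CAC."""
    return (ltv * payback_fraction) / cac

# LTV $300, CAC $100, 40% of LTV recovered per cycle -> k = 1.2, compounds.
print(paid_loop_k(300, 100, 0.4))
# Same LTV and CAC but only 30% recovered per cycle -> k = 0.9, shrinks.
print(paid_loop_k(300, 100, 0.3))
```

Note that the headline LTV/CAC ratio (3:1 in both cases) is identical; payback speed is what decides whether the loop compounds or decays.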
Brian Balfour’s Reforge essay “Growth Loops are the New Funnels” is required reading. The five-minute version: enduring companies stack two or three loops. Slack runs viral on top of content on top of paid on top of sales. The PM job is to compose a stack, not pick a loop.
// FIGURE 02 · INTERACTIVE
Loops compound. Funnels do not.

The four canonical growth loops, classified by what reinvests: people, content, money, or relationships.

Content loop: 01 article published → 02 reader arrives via search → 03 signs up / engages → 04 generates content or data → 05 fuels the next article.
Compounding metric
Content velocity × organic traffic per piece
Examples
Glassdoor (reviews), Yelp (reviews), Zillow (listings), HubSpot blog
Each piece of content compounds organic surface area. User-generated content creates more content without the team writing it. Even without UGC, a content loop can run on insights extracted from served customers: 'we audited 100 SaaS landing pages and here is what we found.'
For a service business
The most realistic loop for the service. Each completed audit produces a case study, which becomes a teardown article, which ranks for relevant queries, which produces more audit leads. The data you gather across audits compounds into research-style posts that no individual practitioner can replicate.

// 03 · The North Star Metric

Every team optimizes against something. If the something is not chosen deliberately, it gets chosen by accident: revenue this quarter, MAU on the dashboard, feature usage from the most recent launch. Default metrics produce drift. The NSM is the deliberate choice.

The four properties of a good NSM

  • Reflects customer value, not just business value: Revenue is a business metric. Nights booked (Airbnb) is a customer-value metric. Songs played (Spotify) is a customer-value metric.
  • Leading indicator of revenue: If your NSM moves and revenue does not, the NSM is wrong. If it moves and revenue follows 3–9 months later, it is correct.
  • Decomposable into input metrics: The NSM at the top must break down into a tree of inputs the team can move directly.
  • Actionable and unambiguous: Every team member should be able to answer: did we move the NSM today? What would I do tomorrow to move it?

Famous NSMs

  • Airbnb: Nights booked. Two-sided marketplace tree, multiplicative.
  • Spotify: Time spent listening. MAU × sessions × minutes per session.
  • Slack: Weekly active teams of 3+. Captures the network-effect threshold.
  • Facebook: Started as MAU, evolved to "meaningful interactions" as the product matured. NSMs are not permanent.
// FIGURE 03 · INTERACTIVE
One root metric, many input metrics, one logic

The NSM is the single number that captures customer value. It must decompose into input metrics the team can actually move. Click any node.

North Star Metric: Audits delivered per quarter
├─ L1 input: Lead volume
│    ├─ L2: Organic traffic
│    ├─ L2: Outbound replies
│    └─ L2: Referral leads
├─ L1 input: Lead-to-audit rate
│    ├─ L2: Landing page CVR
│    └─ L2: Sales-call close rate
└─ L1 input: Capacity utilization
     ├─ L2: Hours per audit
     └─ L2: Active weeks per quarter
Root: Audits delivered per quarter
Formula · Lead volume × Lead-to-audit rate × Capacity utilization
The customer-facing value-creation event. Picked because it is a leading indicator of revenue, decomposable, and unambiguous (an audit either was or was not delivered).
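Because the decomposition is multiplicative, a percentage lift on any one input passes straight through to the NSM, which is what makes the tree actionable. A sketch using the figure's formula (the baseline numbers are assumed for illustration):

```python
# Sketch (assumed baseline numbers) of the figure's multiplicative NSM tree:
# NSM = lead volume x lead-to-audit rate x capacity utilization.
def nsm(leads: float, lead_to_audit: float, utilization: float) -> float:
    """Audits delivered per quarter under the example decomposition."""
    return leads * lead_to_audit * utilization

baseline = nsm(200, 0.05, 0.8)         # 200 leads, 5% convert, 80% capacity used
lifted = nsm(200, 0.05 * 1.10, 0.8)    # +10% on one input -> +10% on the NSM
print(baseline, lifted)                # 8.0 and ~8.8 audits per quarter
```

The same property cuts the other way: a 10% decline in any single input drags the whole NSM down 10%, which is why each L1 input needs an owner.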

// 04 · Activation and the aha moment

The most expensive failure in growth is acquiring users who never experience the product’s value. A leaky bucket cannot be filled. Plugging the activation leak usually has 5–10x the leverage of an equivalent acquisition lift, because every acquired user retroactively becomes more valuable.

The aha moment, defined

The aha moment is the specific in-product action that statistically separates retainers from churners. Found through cohort analysis, not interviews.

  • Facebook: 7 friends in 10 days.
  • Twitter: Follow 30 accounts. Below 30, the feed is sparse.
  • Slack: 2,000 messages sent in a team, the point at which integration into workflow has happened.
  • Dropbox: 1 file uploaded on at least 1 device. Without one file, Dropbox is a logo on the desktop.
  • Pinterest: 5+ pins in the first session.
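"Found through cohort analysis, not interviews" means, mechanically: sweep candidate thresholds and pick the one that best separates retained users from churned ones. A minimal sketch on toy data (the cohort numbers are invented for illustration):

```python
# Sketch with toy data: find the action-count threshold that best separates
# retained users from churned ones (the statistical "aha moment" search).
def best_threshold(actions: list[int], retained: list[bool]) -> int:
    """Threshold t maximizing retention(users >= t) - retention(users < t)."""
    best_t, best_gap = 0, -1.0
    for t in range(1, max(actions) + 1):
        above = [r for a, r in zip(actions, retained) if a >= t]
        below = [r for a, r in zip(actions, retained) if a < t]
        if not above or not below:
            continue
        gap = sum(above) / len(above) - sum(below) / len(below)
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t

# Toy cohort: users with 3+ actions retained, users below mostly churned.
actions = [0, 1, 2, 3, 3, 4, 5, 1, 6, 2]
retained = [False, False, False, True, True, True, True, False, True, False]
print(best_threshold(actions, retained))  # 3
```

A real analysis would add sample-size guards and confidence intervals around the gap, but the shape of the search is the same.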
// FIGURE 04 · INTERACTIVE
The activation threshold that predicts retention

Two simulated 30-day retention cohorts (activated vs not). Drag the slider to change the activation definition and watch the gap widen or close.

[Chart: 30-day retention curves for the two cohorts, with the aha window marked. At the default activation definition, 38% of users activate; D30 retention is 57% for the activated cohort and 0% for the non-activated one.]

// 05 · The experiment portfolio

Experimentation should be treated like venture investing. Most experiments are flat or losses. A few drive most of the value. Run enough volume to find the wins; do not try to make every experiment a win.

We believe that [change] for [audience] will result in [outcome] because [reason]. We will know we are right when we see [metric] [direction] [threshold] over [duration].

The portfolio shape

25–35% of experiments produce a meaningful win. 15–25% produce a meaningful loss (often more valuable than wins because they reshape priors). 40–60% are flats.

Five common errors

  • Peeking: Stopping the test the moment p crosses 0.05. Generates false positives. Fix duration before launch.
  • Optimizing on a leading metric that does not drive the lagging one: Running tests against CTR when the goal is revenue. Validate the leading-to-lagging chain first.
  • Multiple comparisons without correction: Splitting into 8 segments and reporting the segment that won. Apply Bonferroni or pre-register which segments matter.
  • Local wins that hurt the system: A change that lifts signup but tanks activation. Monitor an OEC (overall evaluation criterion) panel of metrics, not just the test metric.
  • No kill criteria: Without "if X happens, we kill it" rules, mediocre tests run forever, consuming traffic and attention.
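The peeking error is worth seeing with numbers. In this sketch both arms have the same conversion rate, so every "significant" result is a false positive; checking the p-value twenty times during the test inflates the false-positive rate well past the nominal 5%, while a single fixed-horizon check stays near it. (Sample sizes, check counts, and the base rate are illustrative.)

```python
# Sketch: simulating why "peeking" inflates false positives. Both arms have
# the SAME conversion rate, so any significant result is a false positive.
import random
from statistics import NormalDist

def z_test_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value from a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    if se == 0:
        return 1.0
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - NormalDist().cdf(z))

def run_test(peek: bool, n=2000, checks=20, rate=0.10, rng=random):
    """Return True if the test (wrongly) declares significance."""
    a = b = na = nb = 0
    for _ in range(checks):
        for _ in range(n // checks):
            na += 1; nb += 1
            a += rng.random() < rate
            b += rng.random() < rate
        if peek and z_test_p(a, na, b, nb) < 0.05:
            return True                      # stopped early: false positive
    return z_test_p(a, na, b, nb) < 0.05     # single fixed-horizon decision

random.seed(0)
trials = 500
peeking = sum(run_test(True) for _ in range(trials)) / trials
fixed = sum(run_test(False) for _ in range(trials)) / trials
print(f"false-positive rate with peeking: {peeking:.0%}, fixed horizon: {fixed:.0%}")
```

Fixing the duration before launch, as the bullet above says, is exactly the second branch: one pre-registered decision point.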
// FIGURE 05 · INTERACTIVE
24 experiments, 9 wins, 4 losses, 9 flats, 2 killed

A real growth experiment portfolio over a year of work. Most do not move the needle, a few drive most of the value. Click any cell to inspect.

Ran: 24 · Wins: 9 of 24 (≈38%) · Net impact: +169% · Biggest win: +31%
E01 · Win · +24% impact
Hero headline value-prop rewrite
Hypothesis · Lead with outcome, not service category. Expected 15-25% lift in CTA clicks.
Learning · Specific outcome language ('stop wasting ad spend on weak pages') outperformed feature framing by 22%.

// 06 · Channel-product fit

The product and the acquisition channel must match, or growth stalls regardless of effort. The two axes that determine fit are average contract value (ACV) and self-serve potential. ACV determines the unit economics of acquisition. Self-serve potential determines whether a buyer can complete the purchase without a human in the loop.

The four canonical mismatches

  • Self-serve product, sales-led motion: A simple, low-priced product hampered by unnecessary sales overhead. CAC too high, payback too long. Slack initially had this.
  • Complex enterprise product, content-led motion: Inbound interest exists but cannot close. Sales handoff missing, deals stall.
  • Low-ACV product, paid-led motion: A $20/month tool buying its way to growth. Math sometimes works for a quarter, then breaks.
  • High-ACV product, viral-led motion: Enterprise platform expecting growth through user invites. Buying decisions are committee-led.
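The two-axis logic can be written down as a tiny classifier. A sketch of the quadrant mapping; the dollar and self-serve thresholds are illustrative, not canonical:

```python
# Sketch (thresholds are illustrative, not canonical): map a product's ACV
# and self-serve potential onto the four zones of the quadrant chart.
def gtm_zone(acv: float, self_serve: float) -> str:
    """acv in dollars per year; self_serve in [0, 1], 1 = fully self-serve."""
    if self_serve >= 0.5:
        return "PLG zone" if acv < 5_000 else "hybrid zone (PLG + sales overlay)"
    return "marketing-led zone" if acv < 50_000 else "sales-led zone"

print(gtm_zone(120, 0.9))      # a Calendly-like tool -> PLG zone
print(gtm_zone(150_000, 0.1))  # a Workday-like platform -> sales-led zone
```

Each of the four mismatches above is a product sitting in one zone while the team runs the motion of another.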
// FIGURE 06 · INTERACTIVE
Where the product sits determines which channels can work

X axis: average contract value, low to high. Y axis: self-serve potential. Each quadrant favors different acquisition channels.

[Chart: quadrant map. Low ACV plus fully self-serve is the PLG zone (Calendly, Notion, Figma, Linear); mid ACV plus mixed self-serve is the marketing-led zone (HubSpot, Mailchimp, Drift); high ACV plus requires-sales is the sales-led zone (Salesforce, Snowflake, Workday); between them sits the hybrid zone (Slack, Datadog).]

Where a productized service sits: the marketing-led zone, but with a sales-assisted close (you are the sales rep). ACV in the $500–5,000 range, low self-serve potential because the deliverable requires expert work. The channel mix follows: content, outreach, referrals.

// 07 · The three GTM motions

  • Product-led (PLG): Free tier or trial. Self-serve activation. Viral or content-led acquisition. Low-ACV, low-friction. Notion, Figma, Slack, Linear.
  • Marketing-led: Demand created by content, brand, and paid. Sales lighter or absent. Mid-ACV, longer cycle. HubSpot, Intercom (early).
  • Sales-led: Outbound + inside sales + AE close. High-ACV, longer cycles, high CAC payback. Salesforce, Workday, Snowflake.

Hybrid patterns

  • PLG with sales overlay: Free tier + enterprise sales for accounts above a size threshold. Slack, Figma, Notion, Atlassian.
  • Bottom-up to top-down: PLG to acquire individual users, then sell top-down once organizational footprint exists. Datadog, Snyk, GitHub.
  • Marketing-led with PLG bottom: HubSpot. Brand and content drive demand; free CRM captures users below the sales threshold.
  • Community-led: Practitioner communities, open-source contributions, developer evangelism. Posthog, dbt, Supabase.
// FIGURE 07 · INTERACTIVE
PLG, Marketing-Led, Sales-Led, side by side

Most companies blend two or three. Toggle the comparison dimension to see how each motion handles a given variable.

Product-Led
Product is the primary acquisition vehicle
Average Contract Value
$10 to $5k
Low to mid ACV. Free tier captures the bottom, paid tiers capture power users and teams.
Examples
Slack, Figma, Notion, Calendly, Linear, Loom
Marketing-Led
Brand, content, and demand generation drive the funnel
Average Contract Value
$1k to $50k
Mid-market range. High enough to support content and brand investment, low enough to avoid heavy sales motions.
Examples
HubSpot, Mailchimp, Drift, Stripe (early days), Webflow
Sales-Led
Outbound and inbound sales close every meaningful deal
Average Contract Value
$20k to millions
High ACV is required to fund the sales cost structure. Below $50k, sales-led economics start to break.
Examples
Salesforce, Snowflake, Workday, Palantir, ServiceNow

// 08 · The Growth PRD

The Growth PRD is the single most-asked-for sample in growth-PM hiring loops. It differs from a feature PRD in three ways: it explicitly states the growth-loop coefficient being targeted, it includes a hypothesis in the standard form, and it defines kill criteria alongside success criteria.

GROWTH PRD: [Initiative name]

Loop targeted:        [viral | content | paid | sales]
Coefficient today:    k = 0.92
Coefficient target:   k = 1.10 within 90 days

Hypothesis:           "We believe that [change] for [audience]
                       will result in [outcome] because [reason]."

Success criteria:     [metric] [direction] [threshold] over [duration]
Kill criteria:        if [metric] is below [threshold] at day [N], stop

Instrumentation:      events to fire, dashboards to add, attribution model
Risks / dependencies: what could break, what we need from other teams
Post-launch review:   day [N], owner: [name]
Every Growth PRD ends with a written hypothesis and a written kill criterion. That single discipline separates an experiment portfolio that compounds from one that accumulates sunk cost.
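One way to enforce that discipline is to make the kill criterion executable rather than prose. A sketch mirroring the template above; the field names, metric, and thresholds are assumptions for illustration:

```python
# Sketch (field names and numbers assumed, mirroring the template above):
# a Growth PRD as a structured record whose kill criterion is executable.
from dataclasses import dataclass

@dataclass
class GrowthPRD:
    initiative: str
    loop: str                  # viral | content | paid | sales
    k_today: float
    k_target: float
    hypothesis: str
    success_metric: str
    success_threshold: float   # metric at or above this -> win
    kill_threshold: float      # metric below this at review day -> stop
    review_day: int

    def decide(self, metric_value: float, day: int) -> str:
        """Apply the pre-registered success and kill criteria."""
        if day >= self.review_day and metric_value < self.kill_threshold:
            return "kill"
        return "win" if metric_value >= self.success_threshold else "continue"

prd = GrowthPRD(
    initiative="Referral invite flow", loop="viral", k_today=0.92, k_target=1.10,
    hypothesis="We believe in-product invites for activated users will lift k "
               "because sharing is currently buried in settings.",
    success_metric="invites per activated user",
    success_threshold=0.40, kill_threshold=0.10, review_day=30,
)
print(prd.decide(metric_value=0.05, day=30))  # kill
print(prd.decide(metric_value=0.45, day=14))  # win
```

Because the thresholds are fields, not sentences, the post-launch review is a function call instead of a debate.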

// 09 · Eight things to carry forward

  • 01: The PM-marketing interface is the single most expensive org boundary. Shared metrics + Growth PMs are the cure.
  • 02: Growth loops compound; funnels do not. Stack two or three loops, do not pick one.
  • 03: The NSM is a deliberate choice. Customer-value metric, decomposable, leading indicator of revenue.
  • 04: Activation has 5–10x leverage over acquisition. Find the aha moment through cohort analysis, not interviews.
  • 05: Experimentation is venture investing. Most tests flat. A few drive value. Run volume.
  • 06: Channel-product fit determines whether growth is possible. Mismatches cannot be fixed by effort.
  • 07: Three GTM motions: PLG, marketing-led, sales-led. Most modern companies hybridize.
  • 08: Every Growth PRD ends with a written hypothesis and a written kill criterion. That discipline separates the compounders from the rest.
// PUT IT TO WORK

You can run an experiment from this article in under five minutes.

Pick the strongest claim above. Pre-fill it as a real experiment in Xi — hypothesis, metric, success and kill thresholds — and you’ll have evidence by next month, not opinion.

Run an experiment