Nineteen channels exist. One or two will run your business. The job of a GTM engine is to figure out which two, in what sequence, and how to know when one stops working. This guide is the prioritization framework: the channel universe, ICE scoring, the Bullseye rings, channel experiments with kill criteria, and the channel-level unit economics that decide what scales.
Most companies pick channels based on what worked at their last job, or what is in vogue. Both are bad heuristics. A founder who finishes a channel course feels competent at SEO, email, and LinkedIn, and attempts all three at once. Six months later: half a blog, an email list of 40, and a posting habit that lapsed in week eight. None compounded, because none received the focused attention a channel needs.
// 01 The channel universe
The first mistake in GTM planning is assuming the menu is short. Founders consider 3 or 4 channels they already know, pick the easiest, and call that a strategy. The authors of Traction catalogued 19 distinct acquisition channels: targeting blogs, publicity, unconventional PR, search engine marketing, social and display ads, offline ads, SEO, content marketing, email marketing, engineering as marketing, viral marketing, business development, sales, affiliate programs, existing platforms, trade shows, offline events, speaking engagements, and community building.
The 19-channel universe from Weinberg & Mares' Traction. Each dot is one channel, sized by fit for a service business. Click to inspect; most channels will be wrong for you, and that is the point.
// 02 ICE prioritization
- Impact: If this channel worked well, how much would it move the needle?
- Confidence: How sure are you this channel will actually work for your specific context? Based on evidence, analogies, past attempts.
- Ease: How much effort, time, and cash does it take to run a meaningful test?
ICE score = Impact × Confidence × Ease
Score each axis on a 1-to-10 scale. Multiplying, rather than averaging, punishes a weak score on any axis: a channel nobody believes in, or that is impossible to test cheaply, rarely becomes the winner, even with huge theoretical impact.
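To make the ranking mechanical, here is a minimal sketch. The channel names and their scores are illustrative placeholders, not recommendations:

```python
# Minimal ICE ranking. Channel names and scores are illustrative
# placeholders, not recommendations.

channels = {
    # (impact, confidence, ease), each scored 1-10
    "SEO content":  (8, 6, 4),
    "Paid search":  (7, 3, 8),
    "Partnerships": (9, 2, 3),  # high ceiling, little evidence: sinks anyway
    "Community":    (6, 5, 5),
}

def ice(scores):
    impact, confidence, ease = scores
    return impact * confidence * ease  # one weak axis sinks the whole product

for name, scores in sorted(channels.items(), key=lambda kv: ice(kv[1]), reverse=True):
    print(f"{name:14s} ICE = {ice(scores):3d}")
```

Note how "Partnerships" lands last at 54 despite the 9 on Impact: exactly the overweighting mistake described below.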
Each bubble is a candidate channel. X axis is Confidence (how sure you are it will work). Y axis is Impact (ceiling if it works). Bubble size is Ease. Click to inspect.
Common scoring mistakes
- Overweighting Impact: A channel that scores 9 on Impact and 2 on everything else has an ICE product of 36, near the bottom. Take Confidence and Ease as seriously as Impact.
- Confusing Confidence with optimism: Confidence is "what evidence do I have." If you cannot name one specific piece (a competitor doing it, a data study, a past attempt), confidence is 3 or lower, not 7.
- Ignoring opportunity cost: Ease is relative. A channel taking 10 hours/week is easy if your others take 20, hard if your others take 3.
// 03 The Bullseye framework
ICE produces a ranked list. The Bullseye determines what you do with it.
- Outer ring (possible): All plausible channels: your full ICE-ranked list. Intentionally inclusive.
- Middle ring (promising): 3 to 5 channels in active testing. Small, time-boxed experiments running in parallel with explicit success and kill criteria.
- Inner ring (core): 1 to 2 channels that have earned disproportionate investment. Compounding engines. Most time, budget, attention.
Channels start in the outer ring, graduate to the middle as they show signal, and land in the inner ring once they are proven. Toggle between the three phases.
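The promotion and demotion rules can be stated in a few lines. A sketch, assuming a simple pass/fail verdict per experiment; the one-ring-at-a-time rule is this sketch's own simplification, not a prescription:

```python
# Ring transitions as a tiny state machine. The promote/demote rule
# and the pass/fail input are simplifications for illustration.

RINGS = ["outer", "middle", "inner"]

def next_ring(current: str, experiment_passed: bool) -> str:
    """Graduate one ring on a passed experiment, drop one on a failure."""
    i = RINGS.index(current)
    i = min(i + 1, len(RINGS) - 1) if experiment_passed else max(i - 1, 0)
    return RINGS[i]

assert next_ring("outer", True) == "middle"   # showed signal: into active testing
assert next_ring("middle", True) == "inner"   # proven: earns core investment
assert next_ring("inner", False) == "middle"  # channels move both directions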
// 04 Running channel experiments
Every channel experiment needs five things specified before you start (a minimal sketch follows the list):
- Hypothesis: "A deep technical blog post on the problem the OSS solves will generate at least 5 sign-ups for the hosted tier within 60 days."
- Smallest test: Cheapest version that produces decision-quality evidence. You do not need a 12-post content series; you need 3 deeply technical posts that match how engineers already search for the pain.
- Time box: Fixed duration. 30 / 60 / 90 days depending on the channel. SEO and content need longer because of payback lag. Paid and outreach can be judged in 30.
- Success criteria: "Generates at least X hosted-tier sign-ups at less than Y CAC." If you cannot name the number in advance, you will rationalize whatever result you get.
- Kill criteria: "If after 60 days total hosted-tier sign-ups from the channel are below 2, I shut it down and redirect effort." Kill criteria short-circuit the sunk cost bias.
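Those five fields fit in one small structure, which makes it harder to renegotiate the numbers after the fact. A sketch reusing the OSS-to-hosted example's thresholds; the class and field names are this sketch's own:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the criteria are fixed before the test starts
class ChannelExperiment:
    hypothesis: str
    time_box_days: int
    success_min_signups: int
    kill_below_signups: int

    def verdict(self, days_elapsed: int, signups: int) -> str:
        if days_elapsed < self.time_box_days:
            return "keep running"  # judge only at the time box, not before
        if signups >= self.success_min_signups:
            return "scale"
        if signups < self.kill_below_signups:
            return "kill"          # short-circuits the sunk cost bias
        return "iterate once, then re-judge"

blog = ChannelExperiment(
    hypothesis="Deep technical post drives hosted-tier sign-ups",
    time_box_days=60, success_min_signups=5, kill_below_signups=2,
)
print(blog.verdict(days_elapsed=60, signups=1))  # -> "kill"
```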
The honest shape of channel discovery. Most channels fail; that is the mechanism working correctly. The skill is killing fast, not picking right the first time.
// 05 Channel unit economics
A channel that produces sign-ups is not yet a working channel. It might produce sign-ups at a CAC higher than your LTV, which means every new team account loses money on a unit basis. With a $49/seat/mo hosted tier and a blended team of 4 seats (roughly $2,350 ACV), a $6K CAC implies a payback north of 30 months before you even factor churn. The OSS distribution channel (22K GitHub stars, ~3K weekly active OSS users, 1.8% conversion to the hosted tier) clears the economics comfortably; most paid channels do not.
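The arithmetic is worth making explicit. A back-of-envelope sketch using the numbers above; the only assumption added is payback = CAC ÷ monthly revenue, ignoring gross margin and churn:

```python
# Back-of-envelope payback math for the figures above.

seat_price = 49                     # $/seat/month, hosted tier
seats = 4                           # blended team size
mrr_per_team = seat_price * seats   # $196/month
acv = mrr_per_team * 12             # ~$2,350 annual contract value

def payback_months(cac: float) -> float:
    return cac / mrr_per_team

print(f"{payback_months(2_350):.0f}")  # ~12 months: CAC equal to one year's revenue
print(f"{payback_months(6_000):.0f}")  # ~31 months: typical of expensive paid channels
```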
Payback period, per channel
- SEO content: Long payback (4–6 months) but compounding curve. Demands cash runway.
- Paid ads: Pays back month-by-month but has a ceiling. Stops paying the moment you stop spending.
- Partnerships: Boom-or-bust. One great partner pays back a year of effort; many partners pay back nothing.
- Newsletter sponsorships: Steady output with modest compounding. Sits between SEO and paid in payback profile: a developer newsletter (Bytes, Pointer) sponsorship lands in 50K inboxes once and keeps trickling sign-ups for weeks.
Different channel shapes suit different cash positions. If you can afford a 6-month payback, SEO is superior. If you need leads this month, paid is the answer even at lower total ROI.
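A toy model makes the two shapes concrete. The growth rate, costs, and ROAS below are invented purely for the comparison; only the shapes (compounding vs. linear with a ceiling) come from the argument above:

```python
# Toy cumulative-return curves for the two channel shapes.
# All numbers are invented; only the shapes are the point.

def seo_cumulative(months, monthly_cost=2_000, ramp=800):
    """Compounding: each month's content adds to a base that keeps earning."""
    revenue, total = 0, 0
    for _ in range(months):
        revenue += ramp
        total += revenue - monthly_cost
    return total

def paid_cumulative(months, monthly_spend=2_000, roas=1.4):
    """Linear: net return stops the month the spend stops."""
    return round(months * monthly_spend * (roas - 1))

for m in (3, 6, 12):
    print(m, seo_cumulative(m), paid_cumulative(m))
# month 3:  SEO -1200,  paid +2400  -> paid wins early
# month 6:  SEO +4800,  paid +4800  -> crossover
# month 12: SEO +38400, paid +9600  -> compounding wins if you can wait
```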
LTV:CAC ratio, per channel
The second layer of analysis asks not just when a channel breaks even, but whether each unit of spend is productive. LTV:CAC (introduced in Module 1.1) becomes the health check. The 3:1 minimum for a healthy channel is a useful heuristic.
Healthy channels sit between the 3:1 and 5:1 ratio lines. Below 1:1 you lose money. Above 5:1 you are likely under-spending and could scale more aggressively.
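As a sketch, using the common shorthand LTV = ARPA × gross margin ÷ monthly churn; every input below is illustrative:

```python
# LTV:CAC health check per channel. All inputs are illustrative.

def ltv(arpa_monthly, gross_margin, monthly_churn):
    return arpa_monthly * gross_margin / monthly_churn

def health(ltv_value, cac):
    ratio = ltv_value / cac
    if ratio < 1:   return f"{ratio:.1f}:1 - losing money on every unit"
    if ratio < 3:   return f"{ratio:.1f}:1 - below the 3:1 floor"
    if ratio <= 5:  return f"{ratio:.1f}:1 - healthy"
    return f"{ratio:.1f}:1 - likely under-spending; scale harder"

team_ltv = ltv(arpa_monthly=196, gross_margin=0.80, monthly_churn=0.025)  # ~$6,270
print(health(team_ltv, cac=1_500))  # 4.2:1 - healthy
print(health(team_ltv, cac=6_000))  # 1.0:1 - below the 3:1 floor
```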
// 06 From channels to system
Your GTM engine is not the sum of your channels. It is the composition of them: a directed graph where each layer feeds the next, and a feedback loop turns paying teams back into acquisition.
- Acquisition: Top-of-funnel channels that generate awareness and traffic. Job: attract the right people.
- Capture: Sign-up walls in the OSS docs, hosted-tier landing pages, GitHub Sponsors mentions, conference-talk follow-up forms. Job: turn warm OSS users into team accounts.
- Nurture: Email sequences, content follow-ups, social retargeting. Job: move opt-ins toward readiness to buy.
- Conversion: Hosted-tier free trials, team-trial onboarding, design-partner calls for larger teams. Job: turn engaged sign-ups into paying team accounts.
- Feedback: Customer talks at devops meetups (fuel for acquisition), engineer-to-engineer referrals inside the OSS community (new acquisition), GitHub stars and logos on the site (lift on conversion). Job: turn output back into input.
Hover any node to trace its connections. No single channel produces clients on its own; the engine is the composition of acquisition, capture, nurture, conversion, and feedback working together.
The feedback loop is what separates a funnel from an engine. A team that adopts the hosted tier writes an engineering blog post about how they use it, which earns a slot in a developer newsletter, which attracts new teams from the same archetype. The system compounds only when you close that loop deliberately.
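One way to see the difference: model the engine as a directed graph and check what each stage ultimately feeds. The edges below mirror the five layers above; a pure funnel would never reach acquisition from conversion:

```python
# The engine as a directed graph rather than a funnel. Edges mirror
# the handoffs described above; the feedback edges close the loop.

engine = {
    "acquisition": ["capture"],
    "capture":     ["nurture"],
    "nurture":     ["conversion"],
    "conversion":  ["feedback"],
    "feedback":    ["acquisition", "conversion"],  # output becomes input
}

def reachable(graph: dict[str, list[str]], start: str) -> set[str]:
    """Everything a stage ultimately feeds; a funnel never revisits a node."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(reachable(engine, "conversion"))  # includes "acquisition": the loop is closed
```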
// 07 Six things to carry forward
- 01: 19 channels exist. One or two will run your business. Brainstorm wide, then cut hard.
- 02: ICE forces a ranking. Multiplicative scoring punishes a weak score on any axis. Confidence and Ease matter as much as Impact.
- 03: The Bullseye keeps motion alive: outer (possible), middle (testing), inner (compounding). Channels move both directions.
- 04: Channel experiments need explicit success AND kill criteria, defined before the test. The kill is the forcing function.
- 05: Channel-level CAC and payback are the diagnostics. Blended numbers hide subsidies.
- 06: A working GTM engine is acquisition → capture → nurture → conversion → feedback. The feedback loop is what compounds.
You can run an experiment from this article in under five minutes.
Pick the strongest claim above. Pre-fill it as a real experiment in Xi — hypothesis, metric, success and kill thresholds — and you’ll have evidence by next month, not opinion.
Run an experiment