DM-008 · // GROWTH · 16 min read

The GTM engine: ICE, Bullseye, and how to actually pick the next channel

Most companies pick channels based on what worked at their last job, or what is in vogue. Both are bad heuristics. Here is the structured framework that beats them.

Nineteen channels exist. One or two will run your business. The job of a GTM engine is to figure out which two, in what sequence, and how to know when one stops working. This guide is the prioritization framework: the channel universe, ICE scoring, the Bullseye rings, channel experiments with kill criteria, and the channel-level unit economics that decide what scales.

The failure mode is predictable. A founder who finishes a channel course feels competent at SEO, email, and LinkedIn, and attempts all three at once. Six months later: half a blog, an email list of 40, and a posting habit that lapsed in week eight. None compounded, because none received the focused attention a channel needs.

// 01 · The channel universe

The first mistake in GTM planning is assuming the menu is short. Founders consider 3 or 4 channels they already know, pick the easiest, and call that a strategy. Weinberg and Mares' Traction catalogues 19 distinct acquisition channels: search engine optimization, content marketing, targeting blogs, engineering as marketing, viral marketing, search engine marketing, social and display ads, offline ads, email marketing, business development, sales, affiliate programs, speaking engagements, community building, trade shows, offline events, existing platforms, publicity, and unconventional PR.

// FIGURE 01 · INTERACTIVE
Every acquisition channel ever used to build a startup

The 19-channel universe from Weinberg & Mares' Traction. Each dot is one channel, sized by fit for a service business. Click to inspect; most channels will be wrong for you, and that is the point.

Channel groups: Organic · Paid · Outbound · Relational · PR.

Example — Search Engine Optimization (Organic, fit 5/5): the buyers literally Google "why is my landing page not converting." Highest-fit channel for the service, longest payback curve.
The “obviously wrong” cut. For an OSS-funded devtool selling a hosted SaaS tier, more than half of the 19 are obviously wrong. Trade shows rarely pay back at $49/seat. Public relations to mainstream press doesn’t reach platform engineers. Affiliate programs flop because nobody “recommends” infra tools for a kickback. Reject these explicitly rather than letting them sit as options you “could” try.

// 02 · ICE prioritization

ICE scores each candidate channel on three axes:

  • Impact: If this channel worked well, how much would it move the needle?
  • Confidence: How sure are you this channel will actually work for your specific context? Based on evidence, analogies, past attempts.
  • Ease: How much effort, time, and cash does it take to run a meaningful test?
ICE score = Impact × Confidence × Ease

Use a 1-to-10 scale for each axis. Multiplicative scoring punishes a weak score on any single axis: a channel nobody believes in, or that is impossible to test cheaply, rarely becomes the winner, even with huge theoretical impact.
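The scoring mechanics fit in a few lines. A minimal sketch; only the top-ranked channel's 9 × 9 × 7 split appears in the worked example, so the other axis splits here are illustrative assumptions:

```python
def ice_score(impact: int, confidence: int, ease: int) -> int:
    """ICE = Impact × Confidence × Ease, each scored 1-10.
    The product punishes a weak score on any single axis."""
    for axis in (impact, confidence, ease):
        if not 1 <= axis <= 10:
            raise ValueError("each axis must be scored 1-10")
    return impact * confidence * ease

# First split is from the worked example; the rest are illustrative.
candidates = {
    "Free audit funnel (lead magnet)": (9, 9, 7),
    "SEO content (landing-page keywords)": (9, 7, 5),
    "Trade shows": (6, 2, 1),
}

ranked = sorted(candidates, key=lambda c: ice_score(*candidates[c]), reverse=True)
for name in ranked:
    print(name, ice_score(*candidates[name]))
```

Note how trade shows score 6 on Impact but a 2 and a 1 elsewhere collapse the product to 12: the multiplication does the rejecting for you.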

// FIGURE 02 · INTERACTIVE
Impact × Confidence × Ease, applied to your channels

Each bubble is a candidate channel. X axis is Confidence (how sure you are it will work). Y axis is Impact (ceiling if it works). Bubble size is Ease. Click to inspect.

Ranked by ICE score (I × C × E):

  • #1 Free audit funnel (lead magnet) — 567 (Impact 9 × Confidence 9 × Ease 7). Highest ICE score: you already know how to deliver the audit, you just need the entry page. This should be your anchor channel.
  • #2 Reddit + IH commenting — 432
  • #3 LinkedIn organic posting — 343
  • #4 SEO content (landing-page keywords) — 315
  • #5 Cold email outreach — 180
  • #6 Google Ads (search intent) — 168
  • #7 Podcast guest appearances — 105
  • #8 Agency partnerships (referrals) — 81
  • #9 YouTube tear-downs — 64

Common scoring mistakes

  • Overweighting Impact: A channel that scores 9 on Impact and 2 on everything else has an ICE product of 36, near the bottom. Take Confidence and Ease as seriously as Impact.
  • Confusing Confidence with optimism: Confidence is "what evidence do I have." If you cannot name one specific piece (a competitor doing it, a data study, a past attempt), confidence is 3 or lower, not 7.
  • Ignoring opportunity cost: Ease is relative. A channel taking 10 hours/week is easy if your others take 20, hard if your others take 3.

// 03 · The Bullseye framework

ICE produces a ranked list. The Bullseye determines what you do with it.

  • Outer ring (possible): All plausible channels: your full ICE-ranked list. Intentionally inclusive.
  • Middle ring (promising): 3 to 5 channels in active testing. Small, time-boxed experiments running in parallel with explicit success and kill criteria.
  • Inner ring (core): 1 to 2 channels that have earned disproportionate investment. Compounding engines. Most time, budget, attention.
// FIGURE 03 · INTERACTIVE
Three rings, one motion, inward

Channels start in the outer ring, graduate to the middle as they show signal, and land in the inner ring once they are proven. Toggle between the three phases.

Channels shown across the three rings: SEO, LinkedIn, Reddit, Free Audit, Cold Email, Google Ads, Podcast, Partnerships, YouTube, LinkedIn Ads, Twitter/X, Speaking, Publicity, Community.

Phase shown — Month 1 · Brainstorm: list every plausible channel without filtering. The goal is to avoid blind spots, not to decide. You will reject most of these, but you cannot reject what you never considered.
Why only 1 or 2 in the inner ring. Compounding requires depth. A primary channel almost always needs 5–10 hours per week of focused attention to reach its potential. Split that across 4 channels and none compound. Diversification is usually a symptom of not yet knowing what works. Once you know, concentration is the point.

// 04 · Running channel experiments

Every channel experiment needs five things specified before you start:

  • Hypothesis: "A deep technical blog post on the problem the OSS solves will generate at least 5 sign-ups for the hosted tier within 60 days."
  • Smallest test: Cheapest version that produces decision-quality evidence. You do not need a 12-post content series; you need 3 deeply technical posts that match how engineers already search for the pain.
  • Time box: Fixed duration. 30 / 60 / 90 days depending on the channel. SEO and content need longer because of payback lag. Paid and outreach can be judged in 30.
  • Success criteria: "Generates at least X hosted-tier sign-ups at less than Y CAC." If you cannot name the number in advance, you will rationalize whatever result you get.
  • Kill criteria: "If after 60 days total hosted-tier sign-ups from the channel are below 2, I shut it down and redirect effort." Kill criteria short-circuit the sunk cost bias.
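The five elements above can be pinned down as a data structure, which makes the end-of-test decision mechanical instead of a sunk-cost negotiation. A sketch using the SEO-content numbers from the bullets; the "extend once" middle band, between the kill and success thresholds, is an added assumption:

```python
from dataclasses import dataclass

@dataclass
class ChannelExperiment:
    channel: str
    hypothesis: str
    time_box_days: int
    success_signups: int   # success criterion at the time box
    kill_signups: int      # below this at the time box -> kill

    def decide(self, days_elapsed: int, signups: int) -> str:
        """Verdict at a check-in. All thresholds were fixed up front."""
        if days_elapsed < self.time_box_days:
            return "keep running"          # judge only at the time box
        if signups >= self.success_signups:
            return "scale"                 # met the pre-registered bar
        if signups < self.kill_signups:
            return "kill"                  # pre-registered kill criterion
        return "extend once, then decide"  # ambiguous middle band

content_test = ChannelExperiment(
    channel="SEO content",
    hypothesis="3 deep technical posts -> >=5 hosted-tier sign-ups in 60 days",
    time_box_days=60,
    success_signups=5,
    kill_signups=2,
)
print(content_test.decide(days_elapsed=60, signups=1))  # -> kill
```

The point of encoding it is that `decide` cannot be argued with in week 6: either the number cleared the bar you set in week 0, or it did not.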
// FIGURE 04
From 19 channels to 1 that works

The honest shape of channel discovery. Most channels fail; that is the mechanism working correctly. The skill is killing fast, not picking right the first time.

  • Channels considered — 19. The full traction universe; you rule most of these out in 10 minutes based on obvious fit.
  • Worth a serious look — 9 (47% survive). Organic content, community, outreach, paid search, partnerships, a few others. You have real reasons these might work.
  • Selected for testing — 4 (44% survive). After ICE scoring, the shortlist. These get small time-boxed experiments running in parallel.
  • Show positive signal — 2 (50% survive). Meet your pre-defined success criteria inside the test window. Usually half the tests fail.
  • Scale into core — 1 (50% survive). Only one or two channels actually earn the right to double down. This is the inner ring of the bullseye.
Kill fast, kill explicitly. By week 6 of an underperforming channel, you have invested enough that abandoning it feels like admitting failure. Kill criteria defined in advance short-circuit this. The kill is the forcing function. Without it, every channel stays “promising” forever.

// 05 · Channel unit economics

A channel that produces sign-ups is not yet a working channel. It might produce sign-ups at a CAC higher than your LTV, which means every new team account loses you money on a unit basis. With a $49/seat/mo hosted tier and a blended team of 4 seats (roughly $2,350 ACV), a $6K CAC implies a payback of roughly 30 months before you even factor churn, far past the 12-month payback an early-stage company can usually afford. The OSS distribution channel (22K GitHub stars, ~3K weekly active OSS users, 1.8% conversion to the hosted tier) clears that 12-month bar comfortably; most paid channels do not.
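The payback arithmetic is worth spelling out, since it decides which channels you can afford to wait on. A sketch with gross margin simplified to 100%; the $1,200 OSS-channel CAC comes from the blended-CAC example later in this section:

```python
def payback_months(cac: float, monthly_revenue: float,
                   gross_margin: float = 1.0) -> float:
    """Months of gross profit needed to recover the cost of one account."""
    return cac / (monthly_revenue * gross_margin)

seats, seat_price = 4, 49
team_mrr = seats * seat_price   # $196/mo per blended team account
team_acv = team_mrr * 12        # $2,352 ACV, the "roughly $2,350" above

print(round(payback_months(6000, team_mrr), 1))   # blended $6K CAC -> 30.6 months
print(round(payback_months(1200, team_mrr), 1))   # OSS-driven $1.2K CAC -> 6.1 months
```

At $196/mo of team revenue, a $6K CAC takes about two and a half years to recover, while the OSS channel pays back in roughly six months.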

Payback period, per channel

  • SEO content: Long payback (4–6 months) but compounding curve. Demands cash runway.
  • Paid ads: Pays back month-by-month but has a ceiling. Stops paying the moment you stop spending.
  • Partnerships: Boom-or-bust. One great partner pays back a year of effort; many partners pay back nothing.
  • Newsletter sponsorships: Steady output with modest compounding. Sits between SEO and paid in payback profile: a developer newsletter (Bytes, Pointer) sponsorship lands in 50K inboxes once and keeps trickling sign-ups for weeks.
// FIGURE 05 · INTERACTIVE
Cumulative profit curves, same budget, different shapes

Different channel shapes suit different cash positions. If you can afford a 6-month payback, SEO is superior. If you need leads this month, paid is the answer even at lower total ROI.

SEO content (compounding): heavy upfront investment in writing, near-zero return for 4 to 6 months, then exponential compounding as articles rank; in the charted scenario it breaks even around month 9. Long payback but highest ceiling.

LTV:CAC ratio, per channel

The second layer of analysis asks not just when a channel breaks even, but whether each unit of spend is productive. LTV:CAC (introduced in Module 1.1) becomes the health check. The 3:1 minimum for a healthy channel is a useful heuristic.
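The thresholds in this section reduce to a small classifier. A sketch; the band labels are mine, the 1:1 / 3:1 / 5:1 cut points are from the text:

```python
def channel_health(ltv: float, cac: float) -> str:
    """Classify a channel's LTV:CAC against the 1:1, 3:1, and 5:1 lines."""
    ratio = ltv / cac
    if ratio < 1:
        return "losing money"
    if ratio < 3:
        return "below the 3:1 healthy minimum"
    if ratio <= 5:
        return "healthy"
    return "likely under-spending: could scale more aggressively"

# Free audit funnel from the efficiency map: LTV $3,500, CAC $180 (19.4:1)
print(channel_health(3500, 180))
```

The free audit funnel lands well past 5:1, which is the map's "under-invested, could spend more" zone rather than a problem.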

// FIGURE 06 · INTERACTIVE
Where each channel lives on the efficiency map

Healthy channels sit between the 3:1 and 5:1 ratio lines. Below 1:1 you lose money. Above 5:1 you are likely under-spending and could scale more aggressively.

Channels mapped: SEO content, Free audit funnel, LinkedIn organic, Reddit commenting, Google Ads, LinkedIn Ads, Partnerships.

Example — Free audit funnel: CAC $180, LTV $3,500, ratio 19.4:1. Under-invested (could spend more): low CAC because the audit itself pre-sells the engagement. Your highest-leverage conversion mechanism.
Blended CAC hides the truth. A blended $6K CAC might include OSS-driven sign-ups at $1,200 and Google Ads on high-intent BOFU keywords at $14K per team. You are overinvesting in the second and underinvesting in the first. Channel-level CAC is the input to channel-level decisions. Blended CAC is only useful as a top-line metric, never as a diagnostic.
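The decomposition is one division per channel. A sketch where the per-channel CACs are the article's and the account volumes are illustrative, picked so the blend lands at the $6K headline number:

```python
def blended_cac(channels: dict[str, tuple[float, int]]) -> float:
    """channels maps name -> (total spend in $, accounts acquired)."""
    total_spend = sum(spend for spend, _ in channels.values())
    total_accounts = sum(n for _, n in channels.values())
    return total_spend / total_accounts

# Per-channel CACs from the text; volumes are illustrative assumptions.
channels = {
    "OSS-driven sign-ups": (1200 * 5, 5),          # $1,200 per team
    "Google Ads (BOFU keywords)": (14000 * 3, 3),  # $14,000 per team
}
print(blended_cac(channels))  # 6000.0 -- one number hiding an ~11.7x spread
```

The blended figure looks like one mediocre channel; the per-channel figures say cut one budget and grow the other.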

// 06 · From channels to system

Your GTM engine is not the sum of your channels. It is the composition of them: a directed graph where each layer feeds the next, and a feedback loop turns paying teams back into acquisition.

  • Acquisition: Top-of-funnel channels that generate awareness and traffic. Job: attract the right people.
  • Capture: Sign-up walls in the OSS docs, hosted-tier landing pages, GitHub Sponsors mentions, conference-talk follow-up forms. Job: turn warm OSS users into team accounts.
  • Nurture: Email sequences, content follow-ups, social retargeting. Job: move opt-ins toward readiness to buy.
  • Conversion: Hosted-tier free trials, team-trial onboarding, design-partner calls for larger teams. Job: turn engaged sign-ups into paying team accounts.
  • Feedback: Customer talks at devops meetups (fuel for acquisition), engineer-to-engineer referrals inside the OSS community (new acquisition), GitHub stars and logos on the site (lift on conversion). Job: turn output back into input.
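The five layers above can be modeled as a small directed graph, and the "funnel vs engine" distinction becomes checkable: a funnel is acyclic, an engine has a feedback cycle. A sketch; the node names follow Figure 07, but the exact edges are an assumption inferred from the stage descriptions:

```python
# Nodes follow Figure 07; edges are inferred, not a spec of the diagram.
engine = {
    "SEO Content":      ["Landing Page"],
    "LinkedIn Posts":   ["Landing Page"],
    "Reddit Comments":  ["Free Audit"],
    "Partnerships":     ["Sales Call"],
    "Landing Page":     ["Email List"],
    "Free Audit":       ["Email List", "Sales Call"],
    "Email List":       ["Nurture Sequence"],
    "Nurture Sequence": ["Sales Call"],
    "Sales Call":       ["Client"],
    "Client":           ["Case Study", "Referral"],
    "Case Study":       ["SEO Content"],   # feedback edge
    "Referral":         ["Sales Call"],    # feedback edge
}

def has_cycle(graph: dict[str, list[str]]) -> bool:
    """DFS with three colors: a back-edge into a GRAY node means a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node: str) -> bool:
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

print(has_cycle(engine))  # True: the feedback edges close the loop
```

Delete the two feedback edges and `has_cycle` returns False: the same nodes are now just a one-way funnel.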
// FIGURE 07 · INTERACTIVE
Channels are nodes in a graph, not line items in a budget

Hover any node to trace its connections. No single channel produces clients on its own; the engine is the composition of acquisition, capture, nurture, conversion, and feedback working together.

Nodes by stage — Acquisition: SEO Content, LinkedIn Posts, Reddit Comments, Partnerships. Capture: Landing Page, Free Audit. Nurture: Email List, Nurture Sequence. Conversion: Sales Call, Client. Feedback: Case Study, Referral (looping back to acquisition).

The feedback loop is what separates a funnel from an engine. A team that adopts the hosted tier writes an engineering blog post about how they use it, which earns a slot in a developer newsletter, which attracts new teams from the same archetype. The system compounds only when you close that loop deliberately.

// 07 · Six things to carry forward

  • 01: 19 channels exist. One or two will run your business. Brainstorm wide, then cut hard.
  • 02: ICE forces a ranking. Multiplicative scoring punishes a zero on any axis. Confidence and Ease matter as much as Impact.
  • 03: The Bullseye keeps motion alive: outer (possible), middle (testing), inner (compounding). Channels move both directions.
  • 04: Channel experiments need explicit success AND kill criteria, defined before the test. The kill is the forcing function.
  • 05: Channel-level CAC and payback are the diagnostics. Blended numbers hide subsidies.
  • 06: A working GTM engine is acquisition → capture → nurture → conversion → feedback. The feedback loop is what compounds.
// PUT IT TO WORK

You can run an experiment from this article in under five minutes.

Pick the strongest claim above. Pre-fill it as a real experiment in Xi — hypothesis, metric, success and kill thresholds — and you’ll have evidence by next month, not opinion.

Run an experiment