Methodology

The LBTR Method

Listen. Bet. Test. Respond. A marketing framework built on one principle: the market is always right, and your job is to hear what it's actually saying.

The Problem

Most marketing is built backward

Most marketing starts with a number. Someone decides how many leads the business needs, works backward to a budget, backward from the budget to a channel mix, backward from the channels to a content plan, and calls the whole thing a strategy. It looks organized. It might even look convincing in a deck.

But it was built backward. The outcome came first. Everything else was invented to justify it.

This is what we call backfilling — and it's the root cause of most marketing failure. When you start from a target and work backward to a plan, you're not building a strategy. You're building a story. A story about what the market will do, how customers will respond, which channels will perform, and what results will follow.

The story might be logical. It might be based on last year's numbers or an industry benchmark or what worked for someone else. But it's still a story — because it treats an uncertain future as if it were a settled fact.

The backfilling response is always the same. When the market doesn't cooperate — when leads cost more than projected, or a channel stops performing, or the campaign that crushed it last year lands flat this year — you defend the plan, blame execution, or chase a new tactic. The target never gets questioned. The story never gets examined.

What backfilling produces
  • Plans that look strategic but can't respond to new information
  • Budgets committed to channels before any evidence is collected
  • Teams defending fiction instead of reading reality
  • Expensive campaigns that "just need more time"
  • Marketing that compounds debt, not equity

Backfilling produces brittle marketing. It locks you into defending a fiction instead of responding to reality. LBTR was built to replace it.

The Framework

What LBTR is

LBTR is a marketing methodology built on the adaptive business framework — the principle that businesses grow by responding to current reality with what they actually have, not by declaring arbitrary futures and trying to force them into existence.

Applied to marketing, this means treating your market as a living system that you're in continuous conversation with — not a static target you're trying to hit. The market is always sending signals. Your competitors are moving. Customer behavior is shifting. Channels are evolving.

A marketing system that can hear those signals and respond to them will consistently outperform one that's locked into executing a predetermined plan.

LBTR moves through four stages in a continuous loop: Listen, Bet, Test, Respond. Not once — repeatedly, with each loop informing the next.

The methodology doesn't get stale because it's built to update itself. It gets smarter the longer you use it.

  • 01 Listen: Read current reality before acting
  • 02 Bet: Form a specific, sized hypothesis
  • 03 Test: Run the minimum viable experiment
  • 04 Respond: Update strategy based on what actually happened

The Four Stages

How each stage works —
and what it looks like in practice

Each stage has a distinct job. Skipping or compressing a stage doesn't save time — it reintroduces the backfilling that the methodology is designed to remove.

Stage 01
Listen

Read current reality before acting on it

Before any strategy, any budget allocation, any campaign — you look at what's actually happening. Not what you hoped would be happening. Not what worked eighteen months ago. What's real right now.

This means reading your market honestly: which channels are generating qualified leads versus vanity metrics, what your close rates actually look like by lead source, what customers are doing after they buy, what signals your pipeline is sending about where the real bottleneck is. It means distinguishing data that reveals from data that hides — because plenty of metrics look good while masking a real problem underneath.

It also means listening to what you actually have to work with. Your real capabilities, your real team, your real budget, your real relationships and reputation in the market. Not the resources you wish you had. The ones you actually have. Because the only place strategy can start from is where you actually are.

The Pipeline Audit — what deep listening looks like

The most revealing Listen exercise we run maps the full pipeline from first contact to gross profit: leads by source → estimates scheduled → jobs won → jobs completed → margin by job. Then we slice that data across every dimension that matters — project size, service type, sales rep or estimator, channel, geography, and job type.

We're looking for anomalies. Places where the surface-level story ("we're growing, margins are fine") hides a different story underneath. A capacity-constrained sales team is often spending 50% of its estimate time on project types that represent only 20% of its closed revenue. The constraint isn't leads. It isn't marketing spend. It's how the business's most limited resource — sales capacity — is being allocated. The Listen stage is what makes that visible.
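
The slicing itself is mechanical; the judgment is in reading the result. A minimal sketch of the time-share vs. profit-share comparison, with invented numbers — the field names and figures here are illustrative, not a prescribed schema:

```python
from collections import defaultdict

# Hypothetical pipeline export: one record per lead, mapped from first
# contact through to completed job. All values are invented for illustration.
leads = [
    {"size": "<5K",     "estimate_hours": 2.0, "gross_profit": 900},
    {"size": "<5K",     "estimate_hours": 2.0, "gross_profit": 750},
    {"size": "5K-15K",  "estimate_hours": 2.5, "gross_profit": 0},      # lost job
    {"size": "15K-40K", "estimate_hours": 3.5, "gross_profit": 7200},
    {"size": "15K-40K", "estimate_hours": 3.0, "gross_profit": 6800},
    {"size": "40K+",    "estimate_hours": 4.0, "gross_profit": 16000},
]

time_by_size = defaultdict(float)
profit_by_size = defaultdict(float)
for lead in leads:
    time_by_size[lead["size"]] += lead["estimate_hours"]
    profit_by_size[lead["size"]] += lead["gross_profit"]

total_time = sum(time_by_size.values())
total_profit = sum(profit_by_size.values())

# The anomaly shows up wherever a bucket's share of estimate time
# far exceeds its share of gross profit.
for size in time_by_size:
    time_share = time_by_size[size] / total_time
    profit_share = profit_by_size[size] / total_profit
    print(f"{size:8} time {time_share:5.1%}  profit {profit_share:5.1%}")
```

The same comparison runs per channel, per estimator, per service type — any dimension where time or budget can be quietly misallocated.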

Most businesses think they have a lead problem. The Listen stage more often reveals a conversion problem, a capacity problem, or a job-mix problem. Solving the wrong one is expensive. Solving the right one compounds.

Stage 02
Bet

Form a specific, sized hypothesis — not a plan

Once you've listened, you form a hypothesis. Not a plan, not a prediction, not a target — a bet. A conscious, specific, reasoned decision to invest time and money in something you believe might work, with full awareness that you might be wrong.

The word is intentional. Every marketing decision is a bet made under uncertainty. You don't know which message will resonate, which channel will perform, which offer will convert. Anyone who says otherwise hasn't been paying attention. The question isn't whether you're betting — it's whether you're betting well.

A good bet in LBTR is specific: a defined audience, a defined situation, a defined belief about what that audience values in that situation. It's grounded in what you actually heard in the Listen stage. It's sized to what you can afford to lose if you're wrong — because a bet you can't afford to lose forces you to need it to work, which is exactly where backfilling starts. And it's held openly, as a belief you're testing rather than a story you're defending.

In practice for a painting company: The Listen stage showed that 60% of inbound leads came from GBP, but the profile had 22 reviews at 4.4 stars — below the competitive threshold in the market. The Bet: "If we run a systematic review generation campaign for 60 days and reach 45+ reviews at 4.7+, we will see a measurable increase in GBP-sourced leads before peak season. We're willing to invest $800 and 6 hours of coordination to find out."

Stage 03
Test

Run the minimum investment that generates real information

A bet without a test is just an opinion. The test is how you find out if your belief holds up against actual market conditions.

LBTR tests are deliberately small — the minimum investment that will generate useful information. Not the minimum that might succeed, but the minimum that will teach you something real. Small tests are faster. They're cheaper to abandon when the signal says stop. They carry less ego, which means you're more willing to read the results honestly. And they can run in parallel, so you're learning from multiple hypotheses simultaneously.

Before a test runs, three things are defined: what you're watching for, what result would confirm the hypothesis, and what result would tell you to stop. This is what separates testing from hoping. When you define success and failure criteria before you start, the results have to mean something — you can't retroactively decide the data was noise because you didn't like what it said.

In practice for an HVAC company: The Bet was that a Local Services Ads campaign targeting emergency HVAC terms would produce leads at under $120 CPL in their market. Test parameters: $1,500/month budget, 30-day window, success threshold of 12 leads at or below the CPL target. If it hits the threshold, scale. If it doesn't within the window, stop and diagnose before spending more. The test ran and produced 9 leads at $142 CPL. That missed the threshold on both volume and cost, but it was enough signal to investigate before committing to scale.
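
The pre-commitment can be as literal as a few lines written down before launch. A sketch of the readout rule for this test, using the thresholds from the example above — the function name and structure are ours, not part of the methodology:

```python
def read_test(leads: int, cpl: float,
              min_leads: int = 12, max_cpl: float = 120.0) -> str:
    """Return the response the pre-committed criteria call for.

    Thresholds are fixed before the test runs, so the result forces a
    decision rather than a retroactive reinterpretation of the data.
    """
    if leads >= min_leads and cpl <= max_cpl:
        return "scale"            # hypothesis confirmed: scale carefully
    return "stop and diagnose"    # missed the threshold: investigate first

# 30-day result from the example: 9 leads at $142 CPL.
print(read_test(leads=9, cpl=142.0))   # -> stop and diagnose
```

Trivial as code, but the point is the sequencing: the rule exists before the data does.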

Stage 04
Respond

Let the results change something — or they don't count

This is the step that separates adaptive marketing from every other kind. When results come back — from a test, from a campaign, from a channel, from a quarter — you update your thinking based on what actually happened.

Not what you hoped would happen. Not what the plan said should happen. What happened.

If the test confirmed the hypothesis, you scale it — carefully, while continuing to watch the signals. If the test contradicted the hypothesis, you stop, take the learning, and form a new bet. If the signals are mixed, you probe deeper before committing more resources. In every case, the result changes something — your model of what works, your budget allocation, your next hypothesis. Results that don't change anything aren't being read honestly.

Responding also means knowing when to stop. One of the most expensive things in marketing is continuing to run something that isn't working because you've already spent money on it. LBTR uses affordable loss limits — defined before the test runs — to force that decision cleanly, before sunk cost takes over.

Then the loop starts again. You go back to listening, now with more information than you had before. Each loop makes the next bet sharper, the next test cleaner, the next response faster. The methodology compounds over time — which is the opposite of a plan that goes stale the moment the market moves.

Clarifications

What LBTR is not

A methodology defined by what it is can be misunderstood. These are the most common misreadings.

Not This

Anti-planning or anti-analysis

LBTR uses forward-looking models — demand seasonality, historical win rates, channel trends, competitive dynamics. These are useful inputs that inform better bets. The difference is that we treat them as information rather than commitments. They shape hypotheses. They don't become the story we've decided is true. When the model says one thing and the market says another, the market wins.

Not This

Passive or purely reactive

Listening carefully before acting, forming specific hypotheses, designing clean tests, and reading results honestly requires more discipline than building an annual plan and executing it. It's a different kind of discipline — one that keeps you connected to what's real rather than what you predicted. LBTR is not reactive. It is responsive. The difference is whether you're chasing or deciding.

Not This

A one-time engagement or campaign

LBTR is not a project with a defined end date. It's a way of operating. The value compounds as the loop runs — each cycle builds the knowledge base that makes the next cycle smarter. A business that has run 12 LBTR cycles has a significant structural advantage over one that has run 3 — not because it spent more, but because it learned faster and stopped spending on things that don't work.

In Practice

A full LBTR loop — finding and
eliminating a hidden growth ceiling

One of the most powerful applications of LBTR is uncovering capacity constraints that don't appear on any dashboard. This is a complete loop for a residential contractor doing $10M who had plateaued and couldn't identify why.

Weeks 1–3
Listen

We run the full pipeline audit: every lead for the past 18 months mapped from first contact through to completed job and gross profit. Then we slice it. By project revenue size (bucketed: under $5K, $5K–$15K, $15K–$40K, $40K+). By service type. By channel. By estimator. By geography.

The surface story: healthy revenue, solid margins, one good estimator who closes well. The anomaly the data reveals: that estimator is running 60–70 appointments per month, and 48% of those appointments are for projects under $5,000. When we calculate time-cost per estimate, the sub-$5K appointments are consuming roughly half of the estimator's available capacity.

Here's the number that changes the conversation: those sub-$5K projects represent only 19% of total closed revenue. The estimator is spending half their time on the work that generates a fifth of the results.

What we hear: The constraint isn't leads. The website is fine. The ads are running. The real ceiling is how sales capacity is being allocated. More leads would make this worse, not better — the estimator is already stretched.

Weeks 3–4
Bet

We run an alternate-reality calculation using last year's actual data: what would the numbers have looked like if we hadn't done any estimates under $5,000?

Result: 48% fewer total estimates run. Total closed revenue reduction: 19%. Sales capacity freed: roughly 50%. The math is stark — cut half the appointments, lose a fifth of the revenue. Which means the estimator now has room to pursue more of the work that actually moves the number.
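
The counterfactual is simple arithmetic once the audit data exists. A sketch with the audit's percentages and an invented round appointment count — the absolute numbers here are illustrative, since the real exercise replays the contractor's actual 18-month records:

```python
# Last year's actuals (appointment count invented for illustration;
# the shares are the ones the audit surfaced).
total_estimates = 780          # appointments run last year
small_share = 0.48             # share of appointments under $5K
small_revenue_share = 0.19     # share of closed revenue those produced
small_time_share = 0.50        # share of estimate time they consumed
total_revenue = 10_000_000

# Replay last year with every sub-$5K estimate removed.
estimates_cut = total_estimates * small_share
revenue_lost = total_revenue * small_revenue_share
capacity_freed = small_time_share

print(f"estimates cut:  {estimates_cut:.0f} ({small_share:.0%} of appointments)")
print(f"revenue lost:   ${revenue_lost:,.0f} ({small_revenue_share:.0%})")
print(f"capacity freed: ~{capacity_freed:.0%} of estimator time")
```

Cut roughly half the appointments, lose roughly a fifth of the revenue — the asymmetry is the whole argument for the bet.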

The Bet: "If we implement a pre-qualification screen that stops scheduling estimates on projects likely to come in under $5,000, and the estimator uses that freed capacity to run 30% more $15K+ appointments, total closed revenue will increase by 15–22% within one full season — without adding headcount, marketing spend, or leads."

We write the assumptions explicitly before anything gets implemented:

  • Lead mix at the $15K+ tier remains roughly similar to last year
  • Close rate improves when the estimator has more time per appointment
  • Average project size lands around $18,000 across the $15K+ tier
  • Pre-qualification screening doesn't create significant friction with inbound leads

These assumptions aren't obstacles — they're the specific things we'll watch for in the Test phase. If they hold, we scale. If they don't, we know exactly what to adjust.

Weeks 4–16
Test

A pre-qualification step goes into the lead intake process. The person answering the phone now asks two questions before scheduling an estimate: what kind of project, and roughly what scope. Any inquiry that signals under $5,000 in likely project value gets a phone estimate or a referral — not a full in-person appointment. The estimator doesn't run those appointments.

At 90 days, we review the data against the assumptions:

Confirmed ✓

Close rate on $5K+ estimates improved from 34% to 41%. The estimator, with more time per appointment, was doing more thorough follow-up and presenting better proposals. Assumption held.

Adjusted ≈

Average project size came in at $16,500 — not the $18,000 hypothesized. Revenue improvement still real, but the ceiling on the hypothesis was slightly optimistic. Filed for the next bet.

Unexpected ↗

Pre-qualification calls revealed that 28% of "small project" inquiries were actually larger scope when properly scoped during the call. These were being undersold at first contact. New signal for the next Listen cycle.

Monitoring —

No measurable friction from screening with inbound leads. The owner expected pushback; it didn't materialize. Assumption held, but worth watching as the process continues to scale.

Week 16+
Respond

The lead screening process is made permanent and documented as an SOP. The $5,000 threshold is refined to $4,000 based on test data — a few projects in the $4–5K range were worth doing given their referral and repeat potential, and the data showed it.

The $16,500 average ticket (vs. $18,000 hypothesized) opens a new LBTR loop: the discovery call is now a variable worth testing. Hypothesis: if the pre-qualification call includes two specific scope questions, it will better surface full project needs early — and move the average ticket from $16,500 toward $18,000 by reducing scope underestimation at first contact.

The 28% of undersold inquiries opens a third loop: what if we trained the phone intake on basic scope assessment? Could that lift the ticket average further and reduce the number of estimates that come in below expectation?

Revenue outcome at 6 months: 14% increase in closed revenue versus the same period the prior year. No increase in marketing spend. No new hires. One change to the intake process — and the knowledge that two new LBTR loops are already forming from what the test revealed.

This is one type of LBTR loop. A channel optimization loop — testing a new marketing channel or message — typically runs in 4–8 weeks. A sales capacity loop like this one runs over a quarter. A brand or positioning loop might run over two seasons. The methodology scales to the complexity and time horizon of the bet. What doesn't change is the discipline: listen before acting, name your assumptions, test small, let the results change something real.

Who It's For

LBTR is for businesses
ready to stop backfilling

Situation

Burned by marketing built on targets

For owners who've watched budgets disappear into campaigns that "just needed more time." For marketing directors who are tired of defending plans they knew were fiction when they wrote them. For anyone who has ever been told to just generate more leads when the actual problem was somewhere else entirely.

Revenue Stage

$3M–$25M home service contractors

Large enough to have real marketing complexity and a meaningful budget at stake. Not so large that you need an internal marketing department. The LBTR Method is designed for businesses where the owner or a fractional leader is closely involved in marketing decisions.

Mindset

Ready to let the market tell the truth

For businesses ready to stop forcing a predetermined story onto an uncertain market, and start building the capability to hear what the market is actually saying — and respond to it intelligently. That's not a limitation. That's the advantage.

FAQ

Common questions about
the LBTR Method

What does LBTR stand for?
LBTR stands for Listen, Bet, Test, Respond. It is a four-stage marketing methodology developed by Isaac Holmgren at Labtorio, designed for home service and residential construction companies. The methodology runs as a continuous loop: listening to current market reality, forming a specific hypothesis (a bet), running the minimum viable experiment, and responding to what the results actually show — then repeating.
How is LBTR different from traditional marketing planning?
Traditional marketing planning starts with a revenue target and works backward to a strategy. LBTR calls this backfilling — and it is the root cause of most marketing failure. The LBTR Method starts from current reality, not a declared future. It treats market investments as bets held openly (a hypothesis you're testing) rather than commitments defended against evidence. And it mandates that results actually change the strategy — not just get reported in a monthly deck. When the model says one thing and the market says another, the market wins.
Can LBTR work for a smaller contractor?
Yes. LBTR is specifically designed for businesses where every marketing dollar has real consequence — which describes every home service contractor doing under $25M. The methodology scales with the business: smaller companies run smaller, faster tests; larger companies run more tests in parallel. The core discipline — listening before acting, forming a specific hypothesis, testing small, responding honestly — is the same at any scale. It works particularly well for contractors doing $3M–$25M where the owner is closely involved in marketing decisions.
How long before the LBTR Method produces results?
The first Listen stage typically surfaces actionable insight within 2–3 weeks. The first full Bet-Test-Respond cycle typically completes within 30–60 days. Meaningful revenue impact is typically visible within one full season — 90–180 days depending on the trade and market. The methodology compounds: each loop builds the knowledge base that makes the next loop faster and more accurate. A business that has run 12 LBTR cycles has a structural advantage over one that's run 3 — not because it spent more, but because it learned faster.
Is LBTR anti-planning or anti-analysis?
No. LBTR uses forward-looking models — demand seasonality, historical win rates, channel trends, competitive dynamics. These are useful inputs that inform better bets. The difference is that we treat them as information rather than commitments. They shape hypotheses. They do not become the story we've decided is true. Annual plans, quarterly reviews, and budget projections all have a place in the LBTR framework — as structured inputs to better listening, not as the predetermined outcomes the strategy has to justify.
What is backfilling, and why does it fail?
Backfilling is the practice of deciding on a revenue target first and then inventing a strategy to justify it. The plan starts with a declared outcome, works backward to a budget, backward to a channel mix, and calls the result a strategy. It fails because it treats an uncertain future as a settled fact. When the market doesn't cooperate, the backfilling response is always the same: defend the plan, blame execution, or chase a new tactic. The target never gets questioned. The story never gets examined. Backfilling produces brittle marketing — marketing that can't update itself because updating would mean admitting the original story was wrong.
What is the LBTR Pipeline Audit, and what does it reveal?
The Pipeline Audit is the standard Listen-stage exercise Labtorio runs at the start of every engagement. It maps the full sales pipeline — every lead from first contact through to completed job and gross profit — then slices that data across multiple dimensions simultaneously: project size, service type, channel, estimator, job type, and geography. The goal is to find anomalies — places where the surface story hides a different story underneath. Common findings include sales teams spending the majority of their estimate capacity on project sizes that generate the minority of their revenue; channels that look strong on lead volume but convert at half the rate of other sources; and service types that carry good margins but consume disproportionate overhead or management time. The Pipeline Audit almost always reveals that the real growth constraint is not "we need more leads." It is typically a capacity allocation problem, a conversion problem, or a job-mix problem — and each of those has a different solution.
Next Step

See the LBTR Method
applied to your business

30-minute intro call. We'll listen to where you are, identify the real constraint, and tell you honestly what the first bet should be.

Book an Intro Call