Workflow · Calibration · ~25 min run · HubSpot connector

Generic values teach generic shapes.
Your data has a specific shape.

A copy-paste Claude prompt that derives per-tier dollar values from your actual closed-won data, ICP scoring rubric, and sales cycle. Stop using generic $100 / $900 / $3,000 / $10,000+ placeholders. Calibrate Tier A / B / C / Excluded values across MQL / SQL / Opportunity / Closed-Won to your brand's real conversion economics.

4×4 cells
Stages × Tiers per ICP segment
25-40% lift
Typical algorithm signal improvement
12-mo window
Closed-won data cohort required
Quarterly refresh
Recalculate as data accumulates
01 The Problem in 60 Seconds

The example values aren't yours.
They never were.

A B2B SaaS team reads "send tiered offline conversions: $100 MQL, $900 SQL, $3,000 Opportunity, $10,000+ Closed-Won." They configure those exact values in HubSpot and ship. The values worked in the example because the example was a hypothetical mid-market brand. The team's brand isn't a hypothetical mid-market brand — they're enterprise with $80K ACV, or SMB with $8K ACV, or mid-market but with a 30-day cycle, or mid-market with a 180-day cycle. Generic values teach the algorithm a generic economic shape that doesn't match the team's actual unit economics. The algorithm spends 90 days optimizing toward an MQL:SQL multiplier that's wrong by 2-3x, then the team blames the ad platform when CAC stays flat.

The deeper problem is that tier values are the algorithm's compass, and a wrong compass is worse than no compass. With no offline conversions configured, the algorithm at least optimizes for the form-fill volume signal it has. With incorrectly calibrated tier values, the algorithm aggressively pursues low-value Tier C accounts (because the configured Tier C value of $50 is too high relative to the actual Tier C conversion rate) or under-pursues Tier A accounts (because the configured Tier A value of $1,000 is too low relative to the actual Tier A revenue). Calibration matters more than configuration.

This workflow runs structured tier value calculation. Claude takes your closed-won deal data + ICP rubric + sales cycle and derives per-tier values across 4 stages × 4 tiers = 16 cells per ICP segment. Each cell value is grounded in your actual conversion economics: probability of reaching that stage from this tier × time-discounted ACV × target CAC payback. Run quarterly as closed-won data accumulates and unit economics shift.
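The derivation described above (probability of reaching a stage × value at that stage, with an MQL noise discount) reduces to a few lines of arithmetic. This sketch uses the sample mid-market figures that appear later on this page as placeholders, not benchmarks — substitute your own closed-won data:

```python
# Sketch of the per-tier derivation. All numbers are illustrative sample
# values from this page, not benchmarks -- replace with your own data.
ACV = {"A": 32_000, "B": 19_000, "C": 11_000}   # avg closed-won ACV per tier
OPP_TO_CW = {"A": 0.35, "B": 0.22, "C": 0.12}   # Opportunity -> Closed-Won rate
SQL_TO_OPP = {"A": 0.70, "B": 0.52, "C": 0.31}  # SQL -> Opportunity rate
MQL_TO_SQL = {"A": 0.38, "B": 0.22, "C": 0.09}  # MQL -> SQL rate
MQL_NOISE_DISCOUNT = 0.7                        # 0.6-0.8 for a ~90-day cycle

def tier_values(tier):
    """Chain each stage's value back from ACV via conversion probabilities."""
    cw = ACV[tier]
    opp = cw * OPP_TO_CW[tier]
    sql = opp * SQL_TO_OPP[tier]
    mql = sql * MQL_TO_SQL[tier] * MQL_NOISE_DISCOUNT
    return {"CW": cw, "OPP": opp, "SQL": sql, "MQL": mql}

matrix = {t: tier_values(t) for t in ACV}
matrix["Excluded"] = {s: 0 for s in ("CW", "OPP", "SQL", "MQL")}  # hard cutoff

print(round(matrix["A"]["SQL"]))  # 32,000 x 0.35 x 0.70 = 7840
```

The CAC payback adjustment and rounding steps then operate on this raw matrix before deployment.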

4-Stage Value Pipeline · Each Stage Earns More as Confidence Increases · Values increase with conversion certainty
Stage 1 · MQL (earliest): Marketing-qualified. Most prone to noise. Modest value to avoid signal flooding. Typical: $50-200
Stage 2 · SQL (highest leverage): Sales-qualified. Primary algorithm signal. Calibrate carefully. Typical: $600-1,200
Stage 3 · Opportunity (2-4× SQL): Active deal. Higher confidence. Strong commit signal. Typical: $2K-4K
Stage 4 · Closed-Won (= ACV): Revenue. Equals deal ACV (or weighted ACV if multi-product). Typical: $15K-100K+
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — closed-won data, ICP rubric tiers, win rates by stage and tier, target CAC payback — are the parts you edit. Run quarterly to recalibrate as closed-won data accumulates.

claude_desktop — tiered_conversion_calc.md
Role
You are calculating per-tier conversion values for my B2B SaaS ad-algorithm signal layer. Take my actual closed-won data, ICP rubric tiers, and sales cycle. Derive per-tier values across 4 stages × 4 tiers = 16 cells per ICP segment. Calibrate to my unit economics, not generic placeholders.

My Brand
Brand: [your B2B SaaS brand name]
Average ACV (last 12 months closed-won): [e.g. "$25,000 mid-market"]
Average sales cycle (form fill to closed-won): [e.g. "84 days"]
Target CAC payback: [e.g. "12 months for mid-market" — see Track 05's strategy foundation for benchmarks]

ICP Rubric Tiers
// Pull from Track 05's ICP Scoring Rubric Builder. If not yet run, define tiers based on firmographic + technographic + intent fit.
Tier A definition + score range: [e.g. "ICP score 80+ — strict ICP fit, enterprise-size, expansion-ready"]
Tier B definition + score range: [e.g. "ICP score 60-79 — fits ICP but with 1-2 gaps; SMB-mid market"]
Tier C definition + score range: [e.g. "ICP score 40-59 — partial fit; lower-probability buyers"]
Excluded definition: [e.g. "ICP score <40 — non-fit; don't optimize for these"]

Closed-Won Data (Last 12 Months)
// Pull from HubSpot. If less than 12 months of data is available, use what's available and flag low-confidence calibration.
Total closed-won deals: [e.g. "127 deals"]
Closed-won by tier: [e.g. "Tier A: 78 deals (61%) · Tier B: 38 deals (30%) · Tier C: 11 deals (9%)"]
ACV by tier: [e.g. "Tier A: $32K avg · Tier B: $19K avg · Tier C: $11K avg"]
Sales cycle by tier: [e.g. "Tier A: 72 days · Tier B: 88 days · Tier C: 105 days"]

Funnel Conversion Rates
// Cohort calculation: MQL→SQL rates must use time-lagged cohorts (don't compare same-month MQLs and SQLs for cycles > 30 days).
MQL → SQL conversion rate: [e.g. "Tier A: 38% · Tier B: 22% · Tier C: 9% · blended: 28%"]
SQL → Opportunity rate: [e.g. "Tier A: 70% · Tier B: 52% · Tier C: 31% · blended: 60%"]
Opportunity → Closed-Won rate: [e.g. "Tier A: 35% · Tier B: 22% · Tier C: 12% · blended: 28%"]

Task
Calculate per-tier values across 4 stages:
1. Closed-Won values per tier:
   - Tier A CW value = Tier A average ACV (full deal value)
   - Tier B CW value = Tier B average ACV
   - Tier C CW value = Tier C average ACV
   - Excluded CW value = $0
2. Opportunity values per tier:
   - Tier A Opp value = Tier A CW value × Tier A Opp→CW rate
   - Tier B Opp value = Tier B CW value × Tier B Opp→CW rate
   - Tier C Opp value = Tier C CW value × Tier C Opp→CW rate
   - Excluded Opp value = $0
3. SQL values per tier:
   - Tier A SQL value = Tier A Opp value × Tier A SQL→Opp rate
   - Same formula for Tier B, Tier C
   - Excluded SQL value = $0
4. MQL values per tier:
   - Tier A MQL value = Tier A SQL value × Tier A MQL→SQL rate
   - Same formula for Tier B, Tier C
   - Excluded MQL value = $0
   - Apply the MQL noise discount (multiply by 0.6-0.8 to account for MQL signal volatility — see calibration notes)
5. Target CAC payback adjustment:
   - Compare derived MQL value to target MQL CAC
   - If derived value > 2x target CAC → values are too high; the algorithm will over-bid
   - If derived value < 0.5x target CAC → values are too low; the algorithm will under-bid
   - Adjust derived values within reasonable bounds (typically 0.7-1.3x of the pure mathematical derivation) to meet the payback target
6. Per-tier value differential check:
   - Tier A:Tier B ratio should be 1.5-3x (not too aggressive, not too compressed)
   - Tier B:Tier C ratio should be 5-15x (Tier C accounts close at much lower rates; the algorithm should rarely pursue them)
   - Tier C:Excluded — hard cutoff at $0 for Excluded
   - If ratios fall outside these bounds, flag for review

Output Format
1. Headline: total deals analyzed, average ACV by tier, derived value summary, calibration confidence (high/medium/low based on data volume).
2. 4×4 value matrix: rows = stages (MQL/SQL/Opp/CW), columns = tiers (A/B/C/Excluded). Each cell = derived dollar value.
3. Payback assumption breakdown: target MQL CAC, target SQL CAC, projected CAC payback at derived values, headroom or shortfall.
4. Per-platform deployment specs: Google Ads (offline conversion event values), LinkedIn Ads (CAPI conversion values), HubSpot lifecycle event mapping.
5. Honest calibration:
   - If closed-won data is < 50 deals over 12 months, calibration confidence is LOW. Use derived values directionally but expect recalibration after more data accumulates.
   - If the Tier A:B ratio falls outside 1.5-3x, the ICP rubric may need refinement. Re-check Track 05's ICP Rubric Builder before deploying.
   - If derived MQL values are < $30 across all tiers, the MQL signal is too weak to provide useful algorithm signal. Consider sending only SQL/Opp/CW events.
   - If the sales cycle is < 30 days, the MQL noise discount should be 0.5-0.6 (faster cycles → noisier MQLs). If > 120 days, the discount should be 0.8-0.9.
   - For PLG companies, add a Product Activation event between MQL and SQL valued at 0.5x SQL value. Activation is more reliable than MQL for PLG.
   - If you don't have offline conversions deployed yet (Track 05's Google + LinkedIn CAPI setup), values are theoretical until deployment. Run those workflows first.

// Be specific in calculations. Show the math: "Tier A SQL value = $32K × 35% (Opp→CW) × 70% (SQL→Opp) = $7,840," not just "Tier A SQL = $7,840."
// Apply common-sense rounding to deployable values. $1,047.83 rounds to $1,000. $187.42 rounds to $200. Algorithms don't need precision; round numbers reduce drift over time.
// The MQL noise discount is critical. Without it, MQL values are mathematically correct but operationally over-weighted. The 0.6-0.8 discount accounts for signal volatility.
// Don't recommend changing tier definitions based on calculation output. The calculator values existing tiers; rubric refinement is the ICP Rubric Builder's job.
Track 05 prerequisites: ICP Scoring Rubric Builder defines the tiers; this workflow values them; Google Offline Conversions + LinkedIn CAPI Setup deploy them. The three workflows form a sequence — all are needed for the full ad-algorithm signal stack. Pull closed-won data via Full-Funnel Deal Journey if HubSpot exports are awkward.
03 What Claude Returns

The 4×4 matrix. The deployment specs.

Sample output for a hypothetical mid-market B2B SaaS brand with 127 closed-won deals, $25K average ACV, 84-day sales cycle. Calculator produces 16 cell values across 4 stages × 4 tiers, calibrated to actual conversion economics.

Claude · tier value calculation

Calculating per-tier values from 127 closed-won deals across 12 months. Calibrating to 84-day sales cycle and 12-month CAC payback target.

growthspree.hubspot.deals.closed_won { window: "last_12mo", group_by: "icp_tier", metrics: ["acv","cycle_days","stage_conversion_rates"] }
calculate closed-won values × stage conversion rates × MQL noise discount × CAC payback target = 4×4 matrix

127 deals analyzed. Tier distribution: 78 Tier A (61%) · 38 Tier B (30%) · 11 Tier C (9%). Average ACV: Tier A $32K · Tier B $19K · Tier C $11K. Calibration confidence: HIGH (127 deals over 12 months provides reliable cohort data). Derived values calibrated to 12-month CAC payback target. Algorithm signal lift forecast: 28-35% improvement in cost per SQL within 90 days of deployment.

4×4 Value Matrix · 4 Stages × 4 Tiers · Send these values via offline conversions + CAPI

Stage | Tier A (ICP 80+) | Tier B (60-79) | Tier C (40-59) | Excluded (<40)
CW | $32,000 (avg ACV) | $19,000 (avg ACV) | $11,000 (avg ACV) | $0 (hard cutoff)
OPP | $11,200 (CW × 35%) | $4,180 (CW × 22%) | $1,320 (CW × 12%) | $0 (hard cutoff)
SQL | $7,840 (Opp × 70%) | $2,170 (Opp × 52%) | $410 (Opp × 31%) | $0 (hard cutoff)
MQL | $2,200 (SQL × 38% × 0.7 noise) | $340 (SQL × 22% × 0.7 noise) | $30 (SQL × 9% × 0.7 noise) | $0 (hard cutoff)
Payback Assumption Breakdown · CAC Targets vs Derived Values · 12-month payback target · Mid-market, 80% gross margin
Target SQL CAC: $1,800. Derived from $32K ACV × 80% margin / 12 mo / blended SQL→CW rate. Tier A SQL value $7,840 = 4.4× target CAC — strong algorithm headroom.
Target MQL CAC: $520. Derived Tier A MQL value $2,200 = 4.2× target CAC. Healthy headroom — the algorithm will bid up to ~$2K for Tier-A-fit MQLs without breaking payback.
Tier A:B Ratio: 3.6× (SQL). Slightly above the 1.5-3× recommended band. Tier B accounts may be under-pursued. Consider compressing to ~3× by raising Tier B values 20%.
Tier B:C Ratio: 5.3× (SQL). Within the 5-15× recommended band. Tier C accounts get a small but non-zero signal — the algorithm pursues them only when other options are exhausted.
Per-Platform Deployment Specs · Hand to RevOps · Sequence: Google → LinkedIn → HubSpot lifecycle mapping
Google Ads · Offline Conversions API
Configure 4 offline conversion events: mql_tier_a ($2,200), sql_tier_a ($7,840), opp_tier_a ($11,200), cw_tier_a ($32,000). Repeat per tier for Tier B and Tier C. Total: 4 events × 3 tiers = 12 events (Excluded sends nothing). Set conversion_action_value per event. Round to nearest $100. Deploy via Google Ads OfflineConversionUpload — see Track 05's setup workflow.
LinkedIn Ads · CAPI
Configure 4 LinkedIn conversions per tier: linkedin_mql_tier_X, linkedin_sql_tier_X, etc. Set conversionValue.amount per event. LinkedIn CAPI uses tier-included event names (not tier-as-separate-property). Deploy via LinkedIn Conversions API — see Track 05's HubSpot → LinkedIn CAPI Setup workflow. Same dollar values as Google Ads (don't differentiate by platform — algorithm should learn the same economic shape).
HubSpot · Lifecycle Event Mapping
HubSpot workflows fire offline conversion events when contact lifecycle stages change. Lifecycle stage Marketing Qualified Lead + ICP Tier property = fire mql_tier_X event with corresponding value. Lifecycle stage Sales Qualified Lead + tier = fire sql_tier_X. Etc. Deduplicate at GCLID/LinkedIn-click-id level to prevent double-counting. Operations Hub workflow handles the lookup-and-fire logic.
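The lookup-and-fire logic above can be sketched in a few lines. This is an illustrative model only — the real thing lives in a HubSpot Operations Hub workflow, and the event names and dollar values below are this page's sample specs, not a required schema:

```python
# Illustrative sketch of the lookup-and-fire logic described above.
# Event names mirror the deployment specs; values are the sample matrix.
VALUES = {  # stage -> tier -> deployable dollar value
    "mql": {"a": 2200, "b": 340, "c": 30},
    "sql": {"a": 7840, "b": 2170, "c": 410},
    "opp": {"a": 11200, "b": 4180, "c": 1320},
    "cw":  {"a": 32000, "b": 19000, "c": 11000},
}
_sent = set()  # (click_id, event_name) pairs already fired -- naive dedup

def build_conversion(stage: str, tier: str, click_id: str):
    """Return an offline-conversion payload, or None for Excluded tiers
    and duplicates (dedup at the click-id level, per the spec above)."""
    tier = tier.lower()
    if tier not in VALUES[stage]:       # Excluded tier: send nothing
        return None
    event = f"{stage}_tier_{tier}"
    if (click_id, event) in _sent:      # already fired for this click
        return None
    _sent.add((click_id, event))
    return {"event": event, "value": VALUES[stage][tier],
            "click_id": click_id, "currency": "USD"}
```

A real deployment would forward this payload to the Google Ads offline-conversion upload and LinkedIn CAPI endpoints configured in the Track 05 setup workflows.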
16 calibrated values across 4 stages × 4 tiers, ready for deployment. Sequencing recommendation: Week 1 — RevOps configures all 12 conversion events in Google Ads + LinkedIn Ads + HubSpot. Test with 5-7 sample contacts. Week 2-4 — Live deployment. Monitor algorithm bidding behavior daily for first 2 weeks; expect 5-10% bid volatility as algorithms recalibrate. Week 4-12 — Steady-state operation. Cost per SQL should improve 28-35% within 90 days. Re-run this calculator quarterly as closed-won data accumulates and unit economics shift. Want me to also generate the actual HubSpot workflow JSON for the lifecycle event mapping, or proceed to the Tier A:B ratio compression analysis (your 3.6× is slightly above the 1.5-3× recommended band)?
TIME ELAPSED: 22 MINUTES   ·   SAME CALIBRATION BY HAND: 8-12 HOURS WITH SPREADSHEETS
04 Setup

Four steps. Quarterly cadence.

Run quarterly. Pair with ICP Scoring Rubric Builder (upstream) and Google Offline Conversions + LinkedIn CAPI Setup (downstream deployment). Re-run after major ACV shifts or sales cycle compression.

01
Run upstream first · 1-2 hours

ICP Rubric Builder + closed-won data pull

ICP Rubric Builder defines the tiers; this workflow values them. Run rubric builder first if not already in place. Pull 12-month closed-won data from HubSpot with deal amount, time-to-close, ICP tier, and lifecycle stage timestamps. If < 12 months data available, use what's available + flag low-confidence calibration.

Run ICP Rubric Builder →
02
Configure · 10-15 min

Edit gold variables and conversion rates

Edit the gold variables — closed-won data, ACV by tier, sales cycle by tier, MQL→SQL→Opp→CW conversion rates, target CAC payback. Most important calibration is conversion rates by tier — Tier A typically converts 30-50% better than Tier B; Tier B 2-3x better than Tier C. Use cohort-based rates (time-lagged for sales cycles > 30 days), not same-month snapshots.
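The cohort-based rate calculation above can be sketched as follows — count conversions only among the contacts who became MQLs in a given month, whenever they later converted, rather than dividing this month's SQLs by this month's MQLs. The field names are assumptions about your HubSpot export:

```python
# Time-lagged cohort rate sketch. Field names ('mql_date', 'sql_date')
# are illustrative assumptions about a HubSpot contact export.
from datetime import date

def cohort_mql_to_sql_rate(contacts, cohort_month):
    """contacts: [{'mql_date': date, 'sql_date': date | None}, ...]
    cohort_month: 'YYYY-MM' string selecting the MQL-creation cohort."""
    cohort = [c for c in contacts
              if c["mql_date"].strftime("%Y-%m") == cohort_month]
    if not cohort:
        return None  # no data for this cohort
    converted = sum(1 for c in cohort if c["sql_date"] is not None)
    return converted / len(cohort)

contacts = [
    {"mql_date": date(2024, 1, 10), "sql_date": date(2024, 3, 2)},   # converted 2 months later
    {"mql_date": date(2024, 1, 15), "sql_date": None},               # never converted
    {"mql_date": date(2024, 2, 1),  "sql_date": date(2024, 2, 20)},  # different cohort
]
print(cohort_mql_to_sql_rate(contacts, "2024-01"))  # 0.5
```

A same-month snapshot would have scored January at 0% for an 84-day cycle, which is exactly the distortion cohorting avoids.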

03
Run · 18-25 min

Claude derives 16 cells + payback breakdown + deployment specs

Workflow takes 18-25 minutes. Claude produces the 4×4 value matrix, payback assumption breakdown, per-tier ratio checks, and per-platform deployment specs. Output is the matrix + deployment specs — ready to hand to RevOps for HubSpot workflow + Google + LinkedIn CAPI configuration.

04
Hand to RevOps · 1-3 days deploy

Configure conversion events in Google + LinkedIn + HubSpot

Hand values to RevOps. Configure offline conversions per tier in Google Ads. Configure CAPI events per tier in LinkedIn. Configure HubSpot lifecycle event mapping. Test with 5-7 sample contacts before live deployment. Algorithm recalibration takes 2-4 weeks after deployment — expect 5-10% bid volatility during this window. Cost per SQL should improve 28-35% within 90 days. Re-run calculator quarterly.

05 Prompt Variations

Three ways to cut the same calculation.

Same 4×4 framework, different brand economics. Pick the variant that matches your sales motion and revenue model.

01 / Enterprise variant

For brands with $100K+ ACV and 180+ day sales cycles

Enterprise economics shift the math. Higher ACV + longer cycles + lower MQL→SQL rates produce different value shapes. Closed-Won values are 3-5x higher; MQL values may be lower in absolute terms despite higher ACV (because conversion rates compound multiplicatively across more stages and longer cycles).

Tweak Append: "Enterprise mode. Average ACV $100K+, sales cycle 180+ days. MQL noise discount = 0.85 (longer cycles = more reliable MQL signals). Add Marketing Influenced Pipeline Stage (MIPS) value at 0.3x SQL value to capture multi-touch enterprise journeys. Tier A:B ratio target 2-2.5x (less aggressive than mid-market because Tier B enterprise accounts still have meaningful ACV)."
02 / SMB variant

For brands with sub-$15K ACV and sub-30-day sales cycles

SMB economics shift the math the other way. Lower ACV + shorter cycles + higher MQL→SQL rates compress value differentials. Closed-Won values are smaller; MQL values are relatively higher (because shorter cycles → faster algorithm feedback loops).

Tweak Append: "SMB mode. Average ACV <$15K, sales cycle <30 days. MQL noise discount = 0.55 (faster cycles = more volatile MQL signals due to less qualification time). Lower SQL value floor at $200 (algorithm needs minimum signal threshold to optimize). Tier A:B ratio target 2-3x. Recommend reducing tier count to 3 (A, B, Excluded) — SMB doesn't need granular Tier C distinction."
03 / PLG variant

For product-led growth brands with self-serve activation

PLG economics add the Activation stage between MQL and SQL. Trial activation is a more reliable signal than form-fill MQL. PLG variant adds Activation as a 5th stage with values calibrated to activation→PQL→SQL conversion economics.

Tweak Append: "PLG mode. Add Activation stage between MQL and SQL with value = 0.5x SQL value (more reliable signal than MQL). Add PQL (product-qualified lead) stage between Activation and SQL with value = 0.7x SQL value. 5-stage funnel: MQL → Activation → PQL → SQL → Opp → CW. Reduce MQL noise discount to 0.5 (PLG MQLs are noisier than B2B-sales-led MQLs because trial signups don't indicate buying intent)."
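The PLG multipliers above reduce to simple arithmetic. A sketch, using the multipliers from the tweak and this page's sample Tier A SQL value:

```python
def plg_stage_values(sql_value: float) -> dict:
    """Derive the PLG variant's added stage values from the SQL value:
    Activation = 0.5x SQL, PQL = 0.7x SQL (multipliers from the tweak above)."""
    return {
        "activation": 0.5 * sql_value,
        "pql": 0.7 * sql_value,
        "sql": sql_value,
    }

print(plg_stage_values(7840))  # activation and PQL land between MQL and SQL values
```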
06 Frequently Asked

Quick answers on tier value calibration.

Why not just use the example values?
Because generic values teach the algorithm a generic shape that may not match your brand's economics. The example values ($100 MQL / $900 SQL / $3,000 Opportunity / $10,000+ Closed-Won) assume specific implicit ratios — a 9x multiplier from MQL to SQL, a 3.3x from SQL to Opp, etc. These ratios reflect a typical mid-market B2B SaaS economic shape. If your brand is enterprise ($100K+ ACV), your Closed-Won value should be 3-5x higher. If your brand is SMB ($5-15K ACV), the multipliers compress. If your sales cycle is 30 days, the algorithm needs faster feedback than 84-day cycles. Using example values verbatim works fine for demo purposes but degrades algorithm performance in production. Tier values calibrated to your actual closed-won data improve algorithm signal-to-noise ratio by 25-40%, measured as cost per SQL improvement.
What are the four stages and how should values scale across them?
MQL, SQL, Opportunity, Closed-Won. MQL value should be modest ($50-200 typical) — it's the earliest signal and most prone to noise. SQL value is the highest-leverage tier ($600-1,200 typical for mid-market) — this is where ad algorithms get their primary optimization signal. Opportunity value is 2-4x SQL value. Closed-Won value equals the deal ACV (or weighted ACV if multi-product). Within each stage, Tier A accounts (ICP score 80+) get full value; Tier B (60-79) gets ~50% of full value; Tier C (40-59) gets a small token value ($20-50) to preserve some signal without misdirecting the algorithm; Excluded (below 40) gets $0 — the algorithm should NOT be optimizing for these accounts. The 4-stage × 4-tier structure produces 16 value cells per ICP segment.
How does this differ from the Google Offline Conversions and LinkedIn CAPI setup workflows?
Those two workflows handle deployment — they configure the technical pipes (Google Offline Conversions API, LinkedIn CAPI) that feed conversion events back to ad platforms. They don't determine WHAT values to send. The Tiered Conversion Calculator determines the values. Without this workflow, teams use generic placeholders ($100 MQL / $900 SQL etc.) that may not match their brand's economics. With this workflow, teams send values calibrated to their actual closed-won data. The three workflows form a sequence: ICP Rubric Builder defines the tiers → Tiered Conversion Calculator values them → Offline Conversions + CAPI Setup deploy them. All three are needed for the full Track 05 ad-algorithm signal stack.
What inputs does the calculator need?
Five inputs. (1) Closed-won deal data: 12 months of closed-won deals with deal amount, time-to-close, originating ICP tier. (2) Win rates by stage: MQL → SQL conversion rate, SQL → Opp rate, Opp → Closed-Won rate. (3) Win rates by tier: do Tier A accounts close at a higher rate than Tier B (typically yes, 30-50% higher)? (4) Sales cycle by tier: do Tier A accounts close faster (typically yes, 20-30% faster)? (5) Target CAC payback: typically 12-18 months for mid-market, 6-12 months for SMB, 18-24 months for enterprise. Without inputs 1-2, the calculator falls back to industry benchmarks but flags this as lower-confidence output.
How often should the calculator be re-run?
Quarterly. Closed-won data accumulates; ACV may shift; win rates change as the ICP rubric improves; the sales cycle may compress or expand. Quarterly cadence aligns with the rest of Track 05 — ICP Rubric refreshed quarterly, signal quality audited quarterly. Mid-quarter, monitor algorithm performance: if cost per SQL is rising despite stable conversion volume, tier values may need recalibration. If the algorithm is over-allocating to a specific tier, the tier value differential may be too aggressive. Adjust intra-quarter when algorithm behavior clearly diverges from intent. Major changes (new ACV tier launching, new ICP segment opening, sales cycle compressing 30%+) trigger ad-hoc recalculation regardless of the quarterly schedule.
What does a well-calibrated tier differential look like?
For mid-market B2B SaaS at the SQL stage with $25K ACV: Tier A = $1,000 / Tier B = $500 / Tier C = $50 / Excluded = $0. The differential is roughly 2x A:B and 10x B:C. The 2x A:B ratio is meaningful but not extreme — too aggressive (5x+) and the algorithm over-concentrates on Tier A and misses learning opportunities; too compressed (1.2x or less) and the algorithm doesn't differentiate. The 10x B:C ratio is intentionally extreme — Tier C accounts close at much lower rates and the algorithm should rarely pursue them. Excluded gets a $0 hard cutoff. Note these specific values are illustrative for $25K ACV mid-market; the calculator derives values from your actual closed-won data, which may produce different absolute values and different differentials.
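The differential bands can be checked mechanically. A small sketch, with the band thresholds taken from this page:

```python
def check_ratios(sql_a: float, sql_b: float, sql_c: float) -> list:
    """Flag SQL-stage tier ratios outside the recommended bands
    (A:B 1.5-3x, B:C 5-15x), per the differential guidance above."""
    flags = []
    ab, bc = sql_a / sql_b, sql_b / sql_c
    if not 1.5 <= ab <= 3:
        flags.append(f"A:B ratio {ab:.1f}x outside 1.5-3x band")
    if not 5 <= bc <= 15:
        flags.append(f"B:C ratio {bc:.1f}x outside 5-15x band")
    return flags

print(check_ratios(1000, 500, 50))    # []  -- the example values above pass
print(check_ratios(7840, 2170, 410))  # flags the sample output's 3.6x A:B ratio
```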
GrowthSpree is the #1 B2B SaaS marketing agency for ad-algorithm signal optimization. Senior operators run quarterly tier value recalibration across 300+ accounts using closed-won data from each client's HubSpot. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by tier values calibrated to actual closed-won economics rather than generic placeholders. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see your tier value calculation plus 90-day algorithm performance forecast.

The compass needs to point true.
Yours, not someone else's.

Most B2B SaaS teams configure offline conversions with generic placeholder values and wonder why algorithm performance plateaus. Run the calculator quarterly. Calibrate values to your actual closed-won economics. Deploy via Google + LinkedIn CAPI. Watch cost per SQL improve 28-35% within 90 days. Or have senior GrowthSpree operators run the quarterly calculator across MCP-connected HubSpot data and coordinate RevOps deployment — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month