A copy-paste Claude prompt that derives per-tier dollar values from your actual closed-won data, ICP scoring rubric, and sales cycle. Stop using generic $100 / $900 / $3,000 / $10,000+ placeholders. Calibrate Tier A / B / C / Excluded values across MQL / SQL / Opportunity / Closed-Won to your brand's real conversion economics.
A B2B SaaS team reads "send tiered offline conversions: $100 MQL, $900 SQL, $3,000 Opportunity, $10,000+ Closed-Won." They configure those exact values in HubSpot and ship. The values worked in the example because the example was a hypothetical mid-market brand. The team's brand isn't a hypothetical mid-market brand — they're enterprise with $80K ACV, or SMB with $8K ACV, or mid-market with a 30-day cycle, or mid-market with a 180-day cycle. Generic values teach the algorithm a generic economic shape that doesn't match the team's actual unit economics. The algorithm spends 90 days optimizing toward an MQL:SQL multiplier that's wrong by 2-3x, and then the team blames "the ad platform" when CAC stays flat.
The deeper problem is that tier values are the algorithm's compass, and a wrong compass is worse than no compass. With no offline conversions configured, the algorithm at least optimizes for the form-fill volume signal it has. With incorrectly-calibrated tier values, the algorithm aggressively pursues low-value Tier C accounts (because their Tier C value of $50 is too high relative to your actual Tier C conversion rate) or under-pursues Tier A accounts (because their Tier A value of $1,000 is too low relative to your actual Tier A revenue). Calibration matters more than configuration.
This workflow runs structured tier value calculation. Claude takes your closed-won deal data + ICP rubric + sales cycle and derives per-tier values across 4 stages × 4 tiers = 16 cells per ICP segment. Each cell value is grounded in your actual conversion economics: probability of reaching that stage from this tier × time-discounted ACV × target CAC payback. Run quarterly as closed-won data accumulates and unit economics shift.
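The per-cell math can be sketched in a few lines of Python. Everything below is illustrative: the stage→closed-won probabilities, margin, discount rate, and function names are placeholder assumptions, and the discount term is one reasonable reading of "time-discounted ACV" — expected gross margin of a lead at each stage, discounted for the delay until close.

```python
# Illustrative sketch of the 4 stages x 4 tiers value matrix.
# All numbers are placeholders -- replace them with your own cohort data.

STAGES = ["MQL", "SQL", "Opportunity", "Closed-Won"]

def cell_value(acv, margin, p_stage_to_cw, days_to_close, annual_discount=0.10):
    """Expected gross margin of a lead at this stage for this tier,
    discounted for the remaining sales-cycle delay."""
    discount = (1 + annual_discount) ** (days_to_close / 365)
    return p_stage_to_cw * acv * margin / discount

def value_matrix(tiers):
    matrix = {}
    for tier, t in tiers.items():
        matrix[tier] = {
            stage: round(cell_value(t["acv"], t["margin"], p, t["days_to_close"]), -2)
            for stage, p in zip(STAGES, t["p_to_cw"])
        }
    matrix["Excluded"] = {stage: 0 for stage in STAGES}  # Excluded sends no value
    return matrix

# Hypothetical mid-market inputs (not real cohort data):
tiers = {
    "A": {"acv": 32000, "margin": 0.80, "days_to_close": 84,
          "p_to_cw": [0.09, 0.31, 0.45, 1.0]},   # P(reach closed-won | stage)
    "B": {"acv": 19000, "margin": 0.80, "days_to_close": 84,
          "p_to_cw": [0.06, 0.20, 0.35, 1.0]},
    "C": {"acv": 11000, "margin": 0.80, "days_to_close": 84,
          "p_to_cw": [0.03, 0.10, 0.25, 1.0]},
}
```

The matrix is what gets handed downstream: one dollar value per stage × tier cell, rounded to the nearest $100, with the Excluded row zeroed out.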
The gold variables — closed-won data, ICP rubric tiers, win rates by stage and tier, target CAC payback — are the parts you edit. Run quarterly to recalibrate as closed-won data accumulates.
Sample output for a hypothetical mid-market B2B SaaS brand with 127 closed-won deals, $25K average ACV, 84-day sales cycle. Calculator produces 16 cell values across 4 stages × 4 tiers, calibrated to actual conversion economics.
Calculating per-tier values from 127 closed-won deals across 12 months. Calibrating to 84-day sales cycle and 12-month CAC payback target.
127 deals analyzed. Tier distribution: 78 Tier A (61%) · 38 Tier B (30%) · 11 Tier C (9%). Average ACV: Tier A $32K · Tier B $19K · Tier C $11K. Calibration confidence: HIGH (127 deals over 12 months provides reliable cohort data). Derived values calibrated to 12-month CAC payback target. Algorithm signal lift forecast: 28-35% improvement in cost per SQL within 90 days of deployment.
Tier A SQL value derives from $32K ACV × 80% gross margin × blended SQL→CW win rate, calibrated to the 12-month payback window. Result: Tier A SQL value of $7,840 = 4.4× target CAC — strong algorithm headroom.
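As a sanity check, the sample output's $7,840 back-solves to a blended SQL→CW rate of roughly 30.6% — a rate the output doesn't state explicitly, so the figures below are just arithmetic on the numbers shown:

```python
acv, margin = 32_000, 0.80
annual_gross_margin = acv * margin                 # $25,600 of year-one gross margin
implied_sql_to_cw = 7_840 / annual_gross_margin    # ~0.306 -- back-solved, not stated
implied_target_cac = 7_840 / 4.4                   # ~$1,782, from the "4.4x target CAC" claim
```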
mql_tier_a ($2,200), sql_tier_a ($7,840), opp_tier_a ($11,200), cw_tier_a ($32,000). Repeat per tier (Tier B and Tier C). Total: 4 stages × 3 tiers = 12 events (Excluded sends nothing). Set conversion_action_value per event. Round to the nearest $100. Deploy via Google Ads OfflineConversionUpload — see Track 05's setup workflow.
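Generating the 12 event specs is mechanical once the matrix exists. A minimal sketch — the Tier A values come from the sample output above, while the Tier B/C numbers here are placeholders:

```python
EVENT_STAGES = ["mql", "sql", "opp", "cw"]
EVENT_TIERS = ["a", "b", "c"]          # Excluded sends nothing

def event_specs(values):
    """values: {(stage, tier): dollar_value}. Returns the 12 conversion-event
    specs with conversion_action_value rounded to the nearest $100."""
    return [
        {"event": f"{stage}_tier_{tier}",
         "conversion_action_value": round(values[(stage, tier)], -2)}
        for stage in EVENT_STAGES for tier in EVENT_TIERS
    ]

# Tier A from the sample output; Tier B/C values are placeholders.
values = {
    ("mql", "a"): 2200, ("sql", "a"): 7840, ("opp", "a"): 11200, ("cw", "a"): 32000,
    ("mql", "b"): 1100, ("sql", "b"): 3900, ("opp", "b"): 6200,  ("cw", "b"): 19000,
    ("mql", "c"): 300,  ("sql", "c"): 1100, ("opp", "c"): 2400,  ("cw", "c"): 11000,
}
specs = event_specs(values)
# Note: the round-to-$100 rule turns the displayed $7,840 into $7,800.
```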
linkedin_mql_tier_X, linkedin_sql_tier_X, etc. Set conversionValue.amount per event. LinkedIn CAPI uses tier-included event names (not tier-as-separate-property). Deploy via LinkedIn Conversions API — see Track 05's HubSpot → LinkedIn CAPI Setup workflow. Same dollar values as Google Ads (don't differentiate by platform — algorithm should learn the same economic shape).
Marketing Qualified Lead + ICP Tier property = fire mql_tier_X event with corresponding value. Lifecycle stage Sales Qualified Lead + tier = fire sql_tier_X. Etc. Deduplicate at GCLID/LinkedIn-click-id level to prevent double-counting. Operations Hub workflow handles the lookup-and-fire logic.
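The lookup-and-fire logic with click-ID dedup can be sketched as below. This is a hypothetical helper, not HubSpot's actual workflow API, and the lifecycle-stage keys are illustrative stand-ins for your portal's internal values:

```python
# Sketch of the Operations Hub lookup-and-fire logic. Dedup is at the
# click-ID level so the same GCLID never reports the same stage twice.

STAGE_EVENTS = {                      # lifecycle stage -> event stage prefix
    "marketingqualifiedlead": "mql",  # (illustrative internal names)
    "salesqualifiedlead": "sql",
    "opportunity": "opp",
    "closedwon": "cw",
}

_already_fired = set()   # (click_id, stage) pairs already sent

def fire_offline_conversion(contact, values):
    """contact: dict with lifecycle_stage, icp_tier ('a'/'b'/'c'/'excluded'),
    and gclid. values: {(stage, tier): dollar_value}. Returns the event to
    send, or None for excluded tiers, duplicates, and unmapped stages."""
    stage = STAGE_EVENTS.get(contact["lifecycle_stage"])
    tier = contact["icp_tier"]
    if stage is None or tier == "excluded":
        return None
    key = (contact["gclid"], stage)
    if key in _already_fired:         # GCLID-level dedup
        return None
    _already_fired.add(key)
    return {"event": f"{stage}_tier_{tier}",
            "value": values[(stage, tier)],
            "gclid": contact["gclid"]}
```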
Run quarterly. Pair with ICP Scoring Rubric Builder (upstream) and Google Offline Conversions + LinkedIn CAPI Setup (downstream deployment). Re-run after major ACV shifts or sales cycle compression.
ICP Rubric Builder defines the tiers; this workflow values them. Run rubric builder first if not already in place. Pull 12-month closed-won data from HubSpot with deal amount, time-to-close, ICP tier, and lifecycle stage timestamps. If < 12 months data available, use what's available + flag low-confidence calibration.
Run the ICP Rubric Builder first, then edit the gold variables — closed-won data, ACV by tier, sales cycle by tier, MQL→SQL→Opp→CW conversion rates, target CAC payback. The most important calibration is conversion rates by tier — Tier A typically converts 30-50% better than Tier B; Tier B 2-3x better than Tier C. Use cohort-based rates (time-lagged for sales cycles > 30 days), not same-month snapshots.
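The cohort-based vs same-month distinction matters enough to spell out. A toy sketch — the 90-day maturity window is an assumption sized to an 84-day cycle, and the data shapes are invented for illustration:

```python
from datetime import date

def cohort_rate(mqls, sqls, lag_days=90, as_of=date(2024, 12, 31)):
    """MQL->SQL rate computed per MQL cohort: only count MQLs old enough
    (>= lag_days before as_of) to have had a full conversion window, and
    credit each SQL back to the cohort of its originating MQL -- unlike a
    same-month snapshot, which divides this month's SQLs by this month's
    MQLs and mixes cohorts."""
    mature = [m for m in mqls if (as_of - m["mql_date"]).days >= lag_days]
    converted = sum(1 for m in mature if m["contact_id"] in sqls)
    return converted / len(mature) if mature else 0.0

mqls = [
    {"contact_id": 1, "mql_date": date(2024, 8, 1)},   # mature cohort
    {"contact_id": 2, "mql_date": date(2024, 8, 15)},  # mature cohort
    {"contact_id": 3, "mql_date": date(2024, 12, 1)},  # too recent -- excluded
]
sqls = {1, 3}  # contact 3 converted, but its cohort isn't mature yet
```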
Workflow takes 18-25 minutes. Claude produces the 4×4 value matrix, payback assumption breakdown, per-tier ratio checks, and per-platform deployment specs. Output is the matrix + deployment specs — ready to hand to RevOps for HubSpot workflow + Google + LinkedIn CAPI configuration.
Hand values to RevOps. Configure offline conversions per tier in Google Ads. Configure CAPI events per tier in LinkedIn. Configure HubSpot lifecycle event mapping. Test with 5-7 sample contacts before live deployment. Algorithm recalibration takes 2-4 weeks after deployment — expect 5-10% bid volatility during this window. Cost per SQL should improve 28-35% within 90 days. Re-run calculator quarterly.
Same 4×4 framework, different brand economics. Pick the variant that matches your sales motion and revenue model.
Enterprise economics shift the math. Higher ACV + longer cycles + lower MQL→SQL rates produce different value shapes. Closed-Won values are 3-5x higher; MQL values may be lower in absolute terms despite higher ACV (because conversion rates compound multiplicatively across more stages and longer cycles).
SMB economics shift the math the other way. Lower ACV + shorter cycles + higher MQL→SQL rates compress value differentials. Closed-Won values are smaller; MQL values are relatively higher (because shorter cycles → faster algorithm feedback loops).
PLG economics add the Activation stage between MQL and SQL. Trial activation is a more reliable signal than form-fill MQL. PLG variant adds Activation as a 5th stage with values calibrated to activation→PQL→SQL conversion economics.
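Structurally, the PLG variant is just a fifth stage slotted into the funnel order. The probabilities below are pure placeholders, not PLG benchmarks — only the shape (monotonically rising P(closed-won) as a lead moves right) is the point:

```python
# PLG variant: Activation slots between MQL and SQL as a 5th stage.
PLG_STAGES = ["MQL", "Activation", "SQL", "Opportunity", "Closed-Won"]

# Placeholder probabilities of reaching closed-won from each stage.
p_to_cw = {"MQL": 0.04, "Activation": 0.12, "SQL": 0.30,
           "Opportunity": 0.45, "Closed-Won": 1.0}

# Same per-cell math as the 4-stage matrix: the probability must rise
# at every step, or the derived values won't be monotonic either.
assert all(p_to_cw[a] < p_to_cw[b] for a, b in zip(PLG_STAGES, PLG_STAGES[1:]))
```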
Most B2B SaaS teams configure offline conversions with generic placeholder values and wonder why algorithm performance plateaus. Run the calculator quarterly. Calibrate values to your actual closed-won economics. Deploy via Google + LinkedIn CAPI. Watch cost per SQL improve 28-35% within 90 days. Or have senior GrowthSpree operators run the quarterly calculator across MCP-connected HubSpot data and coordinate RevOps deployment — the same operating motion run across 300+ B2B SaaS accounts.