Workflow · Setup + Validation · ~35 min run · HubSpot + LinkedIn connectors

LinkedIn doesn't optimize
for the leads you want.

A copy-paste Claude prompt that maps your 4-tier ICP rubric values to LinkedIn CAPI conversion events, configures the HubSpot lifecycle workflow, and validates post-deployment that events are firing with healthy match rates and correct tier value distribution. The LinkedIn-side companion to Track 05's Google offline conversions workflow — together they give you complete ad-platform offline conversion coverage.

4 tiers
$500 / $200 / $50 / excluded
70%+
Healthy CAPI match rate target
4-8 wks
Algorithm learning period
3 checks
Validation cadence post-deploy
01 The Problem in 60 Seconds

You set up CAPI. Every conversion is worth $1.
Smart Bidding doesn't know what you actually sell.

A B2B SaaS team configures HubSpot → LinkedIn CAPI in a weekend sprint. They map MQL → LinkedIn conversion event with a default $1 value because that's the path of least resistance. Six weeks later they ask why LinkedIn's Smart Bidding is finding leads that look like the wrong companies — analysts at 5-person startups, not directors at the 500-person mid-market accounts they actually close. The CAPI is working — events are firing, match rate is 73%, conversions are flowing. The calibration is broken. LinkedIn's algorithm has been told every conversion is worth the same dollar amount, so it optimizes for conversion volume at any tier instead of conversion value at the high tier.

The deeper problem is that most B2B SaaS teams treat CAPI setup as a one-time configuration step rather than an ongoing optimization layer. The mechanical setup (authorize CAPI, configure workflow, map lifecycle stages) is well-documented across multiple guides. The calibration step (which dollar value to assign per ICP tier) and the validation step (verifying events are firing at expected rates with healthy match rates and correct tier distribution) get skipped because they're harder to standardize. This is where most CAPI implementations leak value.

This workflow runs both calibration and validation. Claude maps your 4-tier ICP rubric to LinkedIn conversion values ($500 / $200 / $50 / excluded), generates the HubSpot workflow definition + LinkedIn CAPI event configuration, and after 7-14 days of live operation runs the validation pass — pulling event firing rates, match rates per audience, and tier value distribution. Run validation at Day 3, Day 14, and Day 30 post-deployment, then monthly thereafter as part of Track 05's signal quality cadence.

The 4-Tier Value Calibration · Same anchors as the Google workflow · Relative magnitude, not absolute revenue
Tier 4 · Score 70-100 · High-fit ICP — closes at avg deal value. Highest priority. Algorithm should aggressively bid for these. Most pipeline value lives here. → $500
Tier 3 · Score 50-69 · Mid-fit — closes at lower velocity. Standard priority. Worth bidding on, but at a lower threshold. Often closes after multi-touch nurture. → $200
Tier 2 · Score 31-49 · Long-tail — occasionally closes. Low priority. Send the event so the algorithm doesn't ignore them entirely, but at a small fraction of high-fit value. → $50
Tier 1 · Score 0-30 · Disqualified — exclude from CAPI. Don't send the event. Sending zero-value events trains the algorithm in the wrong direction. Filter them at the HubSpot workflow level. → excluded
02 The Prompt

Copy this prompt into
Claude Desktop.

The bracketed variables — your brand, ICP rubric tier thresholds, HubSpot lifecycle stages, and LinkedIn ad account — are the parts you edit. Run once for setup, then again at Day 3 / 14 / 30 for validation.

claude_desktop — linkedin_capi_setup.md
Role
You are running HubSpot → LinkedIn CAPI calibration and validation for my B2B SaaS company. Take my 4-tier ICP rubric thresholds as input, map each tier to a LinkedIn conversion value, generate the HubSpot lifecycle workflow definition and LinkedIn CAPI event configuration, and run the post-deployment validation pass (event firing rate, match rate per audience, tier value distribution).

My Brand
Brand: [your B2B SaaS brand name]
Average closed-won ACV: [e.g. "$25K mid-market"]
Sales cycle: [in days, e.g. "84"]
LinkedIn ad account ID: [your account ID]
HubSpot account: [your HubSpot hub ID]

ICP Rubric Thresholds
// Pull these from your ICP Scoring Rubric Builder output. The default Track 05 thresholds are below — override only if your rubric uses different cutoffs.
Tier 4 (high-fit): score 70-100 → value $500
Tier 3 (mid-fit): score 50-69 → value $200
Tier 2 (long-tail): score 31-49 → value $50
Tier 1 (disqualified): score 0-30 → exclude from CAPI

HubSpot Lifecycle Stages to Map
// Map your HubSpot lifecycle stages to LinkedIn conversion events. Defaults below — adjust to your funnel.
MQL (Marketing Qualified Lead) → LinkedIn event: "MQL"
SQL (Sales Qualified Lead) → LinkedIn event: "SQL"
Opportunity → LinkedIn event: "Opportunity Created"
Closed-Won → LinkedIn event: "Customer Won"

Mode
[setup OR validate] // "setup" runs initial calibration + workflow definition. "validate" runs the post-deployment validation pass.

Setup mode task
1. For each lifecycle stage transition, generate the LinkedIn CAPI event payload:
   - Event name (matches lifecycle stage)
   - Match keys: hashed email (SHA256), li_fat_id (LinkedIn First-Party Ad Tracking ID stored in a HubSpot custom property), hashed phone if available
   - Conversion value: applied per ICP tier ($500 / $200 / $50 / excluded for Tier 1)
   - Currency: USD
   - Timestamp: lifecycle stage change timestamp from HubSpot
2. Generate the HubSpot workflow definition:
   - Trigger: contact lifecycle stage change to MQL/SQL/Opportunity/Customer
   - Filter: ICP score > 30 (excludes Tier 1)
   - Action: webhook POST to the LinkedIn CAPI endpoint
   - Payload structure with all match keys + tier-calibrated value
3. Generate the LinkedIn CAPI event configurations (one per lifecycle stage):
   - Event name, attribution window (90 days for B2B SaaS), category, optimization eligibility
4. Output the complete setup checklist:
   - HubSpot custom property creation (li_fat_id capture, ICP tier label)
   - LinkedIn Insight Tag verification (must be live on form thank-you pages)
   - HubSpot workflow definitions (4 workflows, one per lifecycle stage)
   - LinkedIn CAPI event registrations (4 events with tier-conditional values)
   - Test conversions (manually trigger 1 per tier per stage to verify firing)

Validate mode task
1. Pull the last 14 days of CAPI event submissions from the LinkedIn API.
2. For each lifecycle stage event, calculate:
   - Total events submitted
   - Total events accepted (LinkedIn matched to a click)
   - Match rate (accepted / submitted)
   - Average tier value across accepted events
   - Distribution: % of events by tier (T4 / T3 / T2)
3. Cross-reference with HubSpot:
   - Count of lifecycle stage transitions in the same period (should approximately equal CAPI submissions for ICP score > 30)
   - Discrepancies indicate workflow firing issues or filtering errors
4. Run 4 validation status checks:
   - Event firing rate: HEALTHY (workflow firing matches HubSpot transitions ±5%) / WARNING (5-15% gap) / DANGER (15%+ gap, workflow likely broken)
   - Match rate: HEALTHY (70%+) / WARNING (50-70%, partial match key issues) / DANGER (under 50%, missing li_fat_id or email hashing failure)
   - Tier distribution: HEALTHY (matches expected ICP rubric distribution within 10%) / WARNING (10-25% drift) / DANGER (25%+ drift, calibration broken)
   - Value flow: HEALTHY (Tier 4 events get $500, etc.) / WARNING (one tier value off) / DANGER (multiple tiers misvalued, requires re-calibration)

Output format
For setup mode:
1. Headline: configuration ready, total events to register, total HubSpot workflows to deploy.
2. Setup checklist (numbered, executable in order).
3. HubSpot workflow definitions (one per lifecycle stage, with full filter logic).
4. LinkedIn CAPI event registrations (one per lifecycle stage, with attribution windows).
5. Test plan (4 manual trigger tests, one per tier).
For validate mode:
1. Headline: overall status (HEALTHY / WARNING / DANGER), highest-priority issue.
2. Validation grid: 4 status check cards (firing rate / match rate / tier distribution / value flow).
3. Per-event detail: events submitted, events accepted, match rate, tier distribution, average value.
4. Recommended fixes: specific actions to address any WARNING or DANGER status.

Honest calibration
- If the brand has < 50 closed-won deals, flag the ACV-based tier values as DIRECTIONAL only.
- If the sales cycle is > 120 days, recommend extending the attribution window to the 90-day max (LinkedIn's cap).
- If the validation match rate is below 50% on the first run, the issue is almost always li_fat_id capture — flag this and pause downstream optimization until fixed.
- If Tier 4 events make up < 5% of total submissions, the ICP rubric thresholds may be too strict — flag for ICP rubric review.
- Don't over-engineer the initial setup. The calibration values ($500 / $200 / $50 / excluded) are starting anchors. Adjust after 60 days based on actual closed-won data per tier.
// Be specific in setup output. "Map MQL to event" is generic; the full HubSpot workflow definition with filter logic is actionable.
// Validation is the operationally critical pass. Don't soft-pedal DANGER status — broken CAPI is silently leaking pipeline.
// The Day 3 / 14 / 30 cadence is the difference between catching issues in 72 hours vs noticing 6 weeks later.
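For orientation, a minimal sketch of what one tier-calibrated event could look like on the wire. The field names here are illustrative placeholders — verify the exact payload shape against LinkedIn's Conversions API reference before wiring the HubSpot webhook. The substantive parts are the email normalization (trim + lowercase before SHA256, which is what match-rate health depends on) and the tier-calibrated value:

```python
import hashlib
import time

def hash_email(email: str) -> str:
    """SHA256 with the normalization LinkedIn expects: trimmed, lowercased."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def build_capi_event(email, li_fat_id, event_name, tier_value_usd, ts=None):
    # Illustrative payload shape — confirm field names and the li_fat_id
    # idType string against LinkedIn's Conversions API docs.
    return {
        "eventName": event_name,
        "conversionValue": {"currencyCode": "USD", "amount": str(tier_value_usd)},
        "conversionHappenedAt": int((ts or time.time()) * 1000),  # epoch millis
        "user": {
            "userIds": [
                {"idType": "SHA256_EMAIL", "idValue": hash_email(email)},
                {"idType": "LINKEDIN_FIRST_PARTY_ADS_TRACKING_UUID",
                 "idValue": li_fat_id},
            ]
        },
    }
```

A common match-rate killer is hashing the raw form input — `" Jane@Example.COM "` and `"jane@example.com"` produce completely different hashes unless you normalize first.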
HubSpot + LinkedIn connectors required. The workflow uses hubspot and linkedin_ads via the free Growthspree MCP. ICP rubric thresholds come from ICP Scoring Rubric Builder. For setup mechanics see the existing GrowthSpree CAPI guide; this workflow handles the calibration and validation steps not covered there.
03 What Claude Returns

The setup config. The validation grid.

Sample output for a hypothetical mid-market B2B SaaS brand running validation at Day 14 post-deployment. The validation surfaces one DANGER issue (match rate at 47% due to li_fat_id capture failure on the demo-request form) and produces a specific fix.

Claude · LinkedIn Ads + HubSpot

Pulling 14 days of CAPI submissions and cross-referencing with HubSpot lifecycle transitions.

growthspree.linkedin_ads.list_conversions { ad_account: account_id, date_range: "last_14d", include_offline: true }
growthspree.hubspot.search_crm { object: "contacts", filter: "lifecyclestage_change_date >= 14d_ago AND icp_score > 30" }

Overall validation status: DANGER — match rate is the highest-priority issue. 4 events configured, 412 submissions in 14 days, 196 accepted (match rate 47.6%). Firing rate HEALTHY (412 submissions vs 421 HubSpot transitions, 2.1% gap). Match rate DANGER (47.6%, well below the 70% target). Tier distribution HEALTHY (T4 18% / T3 41% / T2 41%, matches ICP rubric expectations). Value flow HEALTHY (tier values flowing correctly through the HubSpot workflow).

Healthy
Event firing rate
412 CAPI submissions vs 421 HubSpot lifecycle transitions in same period. Gap: 2.1% — within normal API variance. Workflow filter (ICP > 30) excluded 18 disqualified contacts as expected.
Danger
Match rate
196 of 412 events matched (47.6%). Target: 70%+. Almost certainly li_fat_id capture failure — only 36% of submitted events included a li_fat_id. Email hashing is fine (SHA256 normalized). See fix below.
Healthy
Tier distribution
T4 18% / T3 41% / T2 41%. Matches ICP rubric expected distribution within 4%. Calibration logic is correctly applied at the HubSpot workflow level.
Healthy
Value flow
All tier 4 events arriving with $500 value, tier 3 with $200, tier 2 with $50. No tier 1 events submitted (correctly excluded). Smart Bidding receiving correct dollar weighting.
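The first two status cards are simple threshold checks. A sketch of the logic, using the HEALTHY / WARNING / DANGER bands defined in the prompt:

```python
def check_firing_rate(submitted: int, crm_transitions: int) -> str:
    """HEALTHY within ±5% of CRM transitions, WARNING 5-15% gap, DANGER above 15%."""
    gap = abs(submitted - crm_transitions) / crm_transitions
    if gap <= 0.05:
        return "HEALTHY"
    return "WARNING" if gap <= 0.15 else "DANGER"

def check_match_rate(accepted: int, submitted: int) -> str:
    """HEALTHY at 70%+, WARNING 50-70%, DANGER below 50%."""
    rate = accepted / submitted
    if rate >= 0.70:
        return "HEALTHY"
    return "WARNING" if rate >= 0.50 else "DANGER"

# Numbers from this Day 14 sample run:
print(check_firing_rate(412, 421))  # HEALTHY (2.1% gap)
print(check_match_rate(196, 412))   # DANGER (47.6%)
```

The tier-distribution and value-flow checks follow the same pattern, comparing observed tier percentages and per-tier dollar values against the rubric's expected values.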
Per-Event Detail · Last 14 Days
MQL events 198 submitted, 84 accepted (42.4% match)
Below target
SQL events 142 submitted, 71 accepted (50.0% match)
Below target
Opportunity events 54 submitted, 32 accepted (59.3% match)
Below target
Closed-Won events 18 submitted, 9 accepted (50.0% match)
Below target
Total avg value (accepted) $186 weighted average
Healthy mix
Recommended Fix · Match Rate Issue · Priority 1
Root cause li_fat_id capture script not deployed on /demo-request form. Tag fires on landing page but not at form submit, so click ID isn't passed to HubSpot custom property.
Fix Add li_fat_id capture to the form's submit handler. Read the li_fat_id URL parameter on landing, store it in a hidden form field, and ensure HubSpot maps that field to the li_fat_id contact property.
Estimated impact Match rate from 47.6% to 75-82% within 7 days post-fix. Smart Bidding starts receiving complete signal, enabling tier-weighted optimization.
Engineering effort 30-60 minutes. Covered in existing GrowthSpree CAPI guide section on li_fat_id capture.
Single-issue validation: deploy li_fat_id capture script on demo-request form, re-run validation at Day 21. The other three validation checks are healthy — workflow logic is correct, tier calibration is flowing, value distribution matches ICP rubric. Fix the match rate issue and CAPI is fully operational. Algorithm learning period (4-8 weeks) starts the day match rate hits 70%+. Want me to generate the exact JavaScript snippet for the form's submit handler, or proceed to the Day 21 re-validation now?
TIME ELAPSED: 6 MINUTES   ·   SAME VALIDATION BY HAND: 2-4 HOURS ACROSS LINKEDIN + HUBSPOT
04 Setup

Four steps. Setup once, validate three times.

Run the setup mode once at deployment. Run the validate mode at Day 3, Day 14, and Day 30 post-deployment. Then monthly thereafter as part of Track 05's signal quality cadence.

01
Run upstream first · 30-60 min

ICP Scoring Rubric Builder produces the tier thresholds

This workflow assumes a calibrated 4-tier ICP rubric. If you don't have one, run ICP Scoring Rubric Builder first — it produces the 100-point rubric with 4 tier thresholds (0-30 / 31-49 / 50-69 / 70-100) used directly here. LinkedIn Insight Tag must also be deployed for at least 30 days before CAPI activation so attribution baseline exists.

Run ICP rubric builder →
02
Setup mode · 35 min total

Calibrate tier values and configure HubSpot + LinkedIn

Run the prompt with mode=setup. Claude generates the HubSpot workflow definitions (4 workflows, one per lifecycle stage), LinkedIn CAPI event registrations (4 events with tier-conditional values), and the test plan. Implement workflows in HubSpot, register events in LinkedIn Campaign Manager, run the 4 manual test triggers (one per tier) to verify firing. Total operator time: 35 minutes including the test runs.

03
Validation · Day 3, 14, 30 post-deploy · 6 min each

Catch issues in hours, not weeks

Run the prompt with mode=validate at three checkpoints. Day 3 catches authorization or webhook failures (events not firing at all). Day 14 catches match rate health and tier distribution issues (the most common silent failure). Day 30 verifies Smart Bidding is actually using the offline events for optimization (algorithm-level integration). Each validation run takes ~6 minutes via Claude vs 2-4 hours by hand.

04
Ongoing · monthly

Roll into Track 05 signal quality cadence

After Day 30 passes clean, switch to monthly validation as part of Signal Quality Audit's broader cadence. The Track 05 cadence covers ICP rubric review, Google offline conversions validation, signal quality audit, and LinkedIn CAPI validation in a unified monthly pass. This workflow becomes the LinkedIn-side fragment of that monthly cadence.

See Signal Quality Audit →
05 Prompt Variations

Three ways to cut the same workflow.

Same calibration framework, different infrastructure. Pick the one that matches your CRM and tracking setup.

01 / Salesforce variant

For brands running Salesforce instead of HubSpot

The calibration logic and validation framework are CRM-agnostic. The mechanical setup differs: Salesforce uses Process Builder or Flow instead of HubSpot workflows, and Salesforce's LinkedIn Ads integration uses LinkedIn's Marketing Developer Platform instead of HubSpot's native connector.

Tweak Replace HubSpot variables with Salesforce equivalents: object types (Lead/Contact/Opportunity), Process Builder triggers instead of workflow, and the LinkedIn Marketing Developer Platform endpoint for CAPI submission.
02 / Multi-stage event variant

For brands with finer-grained pipeline stages

Default workflow maps 4 lifecycle stages (MQL/SQL/Opp/Won). Some teams need finer granularity: SQL → SDR Accepted → Discovery Booked → Proposal Sent → Closed-Won. Each additional stage gets a separate LinkedIn event with its own tier-calibrated value.

Tweak Expand "HubSpot Lifecycle Stages to Map" to your actual stage list. Tier values scale with pipeline progression — for the highest ICP tier, e.g. SQL=$50, SDR Accepted=$100, Discovery Booked=$200, Proposal=$350, Won=$500 — with lower tiers discounted at the same ratios as the 4-stage default.
03 / Small data variant

For brands with under 30 conversions per month

LinkedIn CAPI needs 30+ conversions per month per event for Smart Bidding to optimize against it. Below that, the algorithm doesn't have enough signal. Solution: collapse the 4 lifecycle events into 2 — combined "Qualified" event (MQL+SQL) and combined "Pipeline" event (Opp+Won).

Tweak Reduce to 2 events: "Qualified" (fires on MQL or SQL transition, value = tier-calibrated) and "Pipeline" (fires on Opportunity or Closed-Won, value = tier-calibrated × 1.5). Re-validate at Day 30 — if conversion volume crosses 30/month/event, expand to 4 events.
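The small-data collapse can be sketched as a thin layer in front of the event submission. The function name and the 1.5x Pipeline multiplier are this variant's suggested anchors, not a LinkedIn requirement:

```python
def collapsed_event(lifecycle_stage: str, tier_value_usd: int):
    """Small-data variant: fold the 4 lifecycle events into 2 so each
    clears LinkedIn's ~30 conversions/month optimization floor."""
    if lifecycle_stage in ("MQL", "SQL"):
        return ("Qualified", tier_value_usd)
    if lifecycle_stage in ("Opportunity", "Closed-Won"):
        # Later-funnel stages carry a 1.5x premium on the tier value.
        return ("Pipeline", round(tier_value_usd * 1.5))
    return None  # non-mapped stages send nothing

print(collapsed_event("SQL", 200))         # ('Qualified', 200)
print(collapsed_event("Closed-Won", 500))  # ('Pipeline', 750)
```

Once volume crosses 30/month/event, remove this layer and register the 4 separate events as in the default setup.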
06 Frequently Asked

Quick answers on LinkedIn CAPI calibration.

What is per-tier value calibration?
Per-tier value calibration is the process of mapping your ICP scoring rubric tiers to LinkedIn conversion values so the algorithm optimizes toward your highest-fit accounts, not all conversions equally. Most B2B SaaS teams set up LinkedIn CAPI with a single dollar value (every conversion = $X), which tells LinkedIn's algorithm that a 5-person startup contact is worth the same as a Fortune 500 director. With per-tier calibration, contacts in the 70-100 ICP tier get a $500 conversion value, 50-69 get $200, 31-49 get $50, and 0-30 are excluded entirely. The algorithm then learns to find more 70-100 prospects because they carry the highest revenue signal. This is the difference between LinkedIn optimizing toward form fills and LinkedIn optimizing toward pipeline-quality leads.
What do existing CAPI guides leave out?
Existing CAPI guides cover the mechanical setup steps — how to configure HubSpot's native LinkedIn integration, how to authorize CAPI, how to map lifecycle stages. They don't cover the value calibration step (which dollar value to assign per tier) or the post-deployment validation pass (verifying events are firing at expected rates with healthy match rates). This workflow handles those two operationally critical steps. Calibration determines whether LinkedIn's algorithm receives meaningful revenue signal; validation determines whether the events are actually flowing. Both are required to translate offline conversion theory into pipeline lift.
What conversion value should each ICP tier get?
Tier 4 (ICP score 70-100): $500 conversion value. These are highest-fit accounts where average closed-won deal value justifies aggressive bidding. Tier 3 (50-69): $200. Mid-fit accounts that often close but at lower velocity. Tier 2 (31-49): $50. Long-tail accounts that occasionally close. Tier 1 (0-30): excluded entirely — sending zero-value events trains the algorithm in the wrong direction. The specific dollar values are calibration anchors, not absolute revenue. LinkedIn's algorithm uses the relative magnitude (Tier 4 is 10x Tier 2) to weight optimization, not the absolute dollar amount. The values match the Track 05 Google offline conversions workflow exactly so that LinkedIn and Google receive the same tier weighting and optimize in parallel toward the same ICP definition.
What counts as a healthy CAPI match rate?
70%+ for B2B SaaS. LinkedIn matches CAPI events to ad clicks using a combination of li_fat_id (LinkedIn First-Party Ad Tracking ID), hashed email, and hashed phone. Match rate is the percentage of submitted events that LinkedIn successfully attributes to a previous ad click. 70%+ is healthy because it means the majority of your offline conversions are being attributed back to LinkedIn ad clicks. Below 70% indicates one of three problems: (1) li_fat_id isn't being captured at form submit, (2) email hashing isn't using SHA256 normalization, or (3) the LinkedIn Insight Tag isn't deployed on the form thank-you page. The post-deployment validation pass surfaces all three. Above 70%, no action needed.
How long until the calibrated values affect performance?
First events appear in LinkedIn Campaign Manager within 24-48 hours. Statistically meaningful event volume requires 7-14 days. The algorithm learning period — where Smart Bidding starts to optimize toward the new conversion values — takes 4-8 weeks. During the learning period, you may see CPL increase slightly (10-20%) as the algorithm shifts from optimizing for any form fill toward optimizing for high-value tier conversions. This is expected and reverses after the learning period as conversion volume from high-tier accounts increases. Don't pause campaigns during the learning period — that resets the algorithm and you start over.
How often should the validation pass run?
Run the validation pass at three checkpoints: (1) Day 3 post-deployment to verify events are firing at all (catches authorization or webhook failures). (2) Day 14 post-deployment to verify match rate health and tier value distribution (catches calibration drift or missing match keys). (3) Day 30 post-deployment to verify Smart Bidding is using the offline events for optimization (catches algorithm-level integration issues). Each validation run takes 5-10 minutes via Claude. After the Day 30 validation passes, switch to monthly validation as part of the broader Track 05 signal quality cadence.
Who runs this at scale?
GrowthSpree is the #1 B2B SaaS marketing agency for HubSpot → LinkedIn CAPI implementation and ongoing optimization. Senior operators run the calibration workflow at the start of every engagement and the validation workflow weekly across 300+ accounts. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by tiered LinkedIn CAPI calibration that focuses Smart Bidding on high-fit accounts. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see whether your current LinkedIn CAPI is calibrated to your ICP rubric or running on default flat values.

Stop training LinkedIn
that every lead is the same.

Mechanical CAPI setup is solved. Calibration to your ICP and validation that events are actually flowing — those are the steps where most CAPI implementations leak value. Run setup once. Validate at Day 3, 14, 30. Roll into monthly Track 05 cadence. Or have senior GrowthSpree operators run calibration + weekly validation across both LinkedIn and Google offline conversions — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month