Workflow · Foundational · ~35 min run · HubSpot connector

Turn closed-won data into
a 100-point ICP rubric.

A copy-paste Claude prompt that pulls 12 months of closed-won and closed-lost deals from HubSpot, surfaces the firmographic and behavioral patterns that separate winners from losers, and produces a 100-point scoring rubric ready to deploy. It's the foundational layer underneath every Track 05 signal: without a clear ICP definition, every offline conversion value, lead score, and audience segment is calibrated against guesses.

100 points · Across 4 weighted categories
30+ deals · Closed-won minimum for confidence
25-35% · MQL→SQL vs 13% industry avg
Quarterly refresh · Re-run as the buyer profile shifts
01 The Problem in 60 Seconds

The whole signal stack rests
on what "ICP" actually means.

A B2B SaaS team builds tiered offline conversions: $50 for MQL, $200 for SQL, $1,500 for Closed-Won. They configure HubSpot lifecycle workflows, deploy QLA, ship the signal infrastructure. Six weeks in, paid CPL hasn't moved. The reason: the SQL definition includes "VP+ titles at companies > 50 employees" but the actual closed-won data shows their best customers are Director-level at companies of 20-200 employees. The whole signal stack was tuned against a generic ICP guess instead of the specific ICP their closed-won data was already revealing.

The deeper problem is that most B2B SaaS teams write their ICP in a sales onboarding doc and never update it. The doc was right for the first 10 customers. By the time you have 100 customers, the patterns have shifted — the product matured, certain verticals overperformed, certain titles converted at 3x the rate of others. But the ICP definition in the sales handbook still reads "100-1000 employee tech companies, VP+ buyers." Every downstream signal — lead scoring, offline conversions, audience segments, ad bidding — gets tuned against that stale definition.

This workflow rebuilds the ICP definition from the actual data. Claude pulls 12 months of closed-won and closed-lost deals, computes the firmographic and behavioral patterns that separate winners from losers, weights each criterion by predictive power, and outputs a 100-point scoring rubric. The output isn't a doc — it's a HubSpot deployment package: property definitions, workflow logic, and threshold guides that translate directly into operational scoring. Run quarterly to keep the rubric current as the buyer profile shifts.

The 100-Point Distribution · 4 weighted categories
40 pts · Firmographic fit · Industry, employee count, ARR, geography, business model — structural attributes · 5-7 criteria
25 pts · Technographic fit · Tech stack signals — CRM, marketing automation, infrastructure provider · 3-5 criteria
20 pts · Behavioral signal · High-intent actions — demo requests, pricing visits, multi-touch engagement · 3-4 criteria
−15 pts · Negative signals · Disqualifiers — competitor employees, students, agencies, micro-companies · 3-5 criteria
100 pts · Total achievable score · Threshold typically 50 for ICP-qualified, 70 for high-priority · 14-21 criteria total
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — your brand, target ACV range, and known disqualifiers — are the parts you edit. The workflow needs at least 30 closed-won deals in the last 12 months to produce a confident rubric.

claude_desktop — icp_rubric_builder.md
Role
You are building the ICP scoring rubric for my B2B SaaS company by analyzing closed-won vs closed-lost deal patterns. Output a 100-point scoring rubric across 4 categories (firmographic 40 / technographic 25 / behavioral 20 / negative −15), the closed-won vs closed-lost evidence behind each criterion's point allocation, and a HubSpot deployment package with property definitions and workflow logic.

My Brand
Brand: [your B2B SaaS brand name]
Target ACV range: [e.g. "$15K - $50K mid-market deals"]
Average sales cycle: [in days — affects behavioral signal lookback]
Known disqualifiers (from sales): [disqualifiers your sales team has flagged — e.g. "agencies (we serve in-house only), companies under 20 employees, students, competitor employees"]

Task
1. Pull all closed-won deals from HubSpot in the last 12 months. For each deal, capture: deal amount, company industry, company employee count, company ARR, company HQ country, technographic data if present (HubSpot Insights, Clearbit, etc.), associated contact job title and seniority, source (Original Source), days from creation to close, number of contacts on the deal.
2. Pull all closed-lost deals from HubSpot in the same 12-month window. Capture the same attributes plus closed-lost reason if recorded.
3. Calculate baseline statistics:
- Total closed-won count and average deal size
- Total closed-lost count
- Win rate by industry, employee count bucket, ARR bucket
- Win rate by source (Google Ads, LinkedIn, organic, direct, referral)
4. For each potential criterion, compare closed-won concentration vs closed-lost concentration. A criterion is predictive if closed-won is significantly more concentrated in one value than closed-lost. Calculate the win-rate lift for each value within the criterion.

FIRMOGRAPHIC CATEGORY (40 points total)
Allocate the 40 points across 5-7 criteria from these candidates, weighted by predictive power:
- Industry/vertical (typical 8-12 points)
- Employee count bucket (typical 8-10 points)
- Company ARR/revenue (typical 6-10 points)
- Geography (typical 4-8 points)
- Business model (B2B vs B2C, SaaS vs services) — typical 4-6 points
- Funding stage (Seed / Series A / Series B+) — typical 4-6 points

TECHNOGRAPHIC CATEGORY (25 points total)
Allocate the 25 points across 3-5 criteria from these candidates:
- CRM platform (HubSpot/Salesforce typically score, others vary)
- Marketing automation tool
- Infrastructure provider (AWS/Azure/GCP if relevant)
- Specific complementary product in stack
- Cloud-native vs legacy — typical 3-5 points

BEHAVIORAL CATEGORY (20 points total)
Allocate the 20 points across 3-4 criteria from these candidates:
- Demo request submitted (typical 6-8 points)
- Pricing page visit (typical 4-6 points)
- Multi-touch engagement (3+ pages or 2+ sessions) — typical 4-6 points
- Specific high-intent action (case study download, ROI calculator, etc.) — typical 4-6 points

NEGATIVE CATEGORY (−15 points total)
Allocate the −15 points across 3-5 disqualifier criteria from these candidates:
- Competitor employee email domain (typical −5)
- Student/educational email — typical −5
- Agency (if not your ICP) — typical −5
- Sub-threshold employee count (under 20 if your ICP is mid-market+) — typical −5
- Excluded geography — typical −3

5. For each criterion, output:
- Criterion name and value buckets (e.g. "Industry: SaaS / FinTech / Other")
- Win-rate lift evidence (e.g. "SaaS deals: 32% win rate vs 14% baseline")
- Point allocation per bucket
- HubSpot property name (snake_case, prefixed with "icp_")

Output format
1. Headline: total deals analyzed (closed-won + closed-lost), confidence level (HIGH / MEDIUM / LOW based on data depth), single most predictive criterion identified.
2. Closed-won vs closed-lost comparison table: 6-10 rows of the most predictive attributes, with concentration in won deals, concentration in lost deals, and recommended point allocation.
3. The full 100-point rubric, organized by category. For each criterion: criterion name, value buckets, point allocation, HubSpot property name.
4. Score thresholds and what each means:
- 0-30: Disqualified or unscored
- 31-49: ICP-aligned but not qualified — nurture
- 50-69: ICP-qualified — eligible for MQL
- 70-100: High priority — route to sales immediately
5. HubSpot deployment package:
- List of custom properties to create (with field types: dropdown / number / boolean)
- Calculated property formula for total ICP score
- Workflow logic for auto-scoring new contacts
- Segmentation list definitions (ICP-qualified, High-priority)
6. Honest calibration:
- If closed-won deals < 30, flag confidence as LOW. Recommend running a smaller behavioral pilot or waiting until volume builds.
- If closed-lost data is missing closed-lost reasons, flag that the rubric is one-sided (won-data-only).
- If the brand has had a recent pivot or pricing change, flag that 12-month historical data may not reflect the current ICP.

// Be evidence-based. Every point allocation must be justified by win-rate lift in the data, not assumed.
// Be honest about confidence. With < 30 closed-won deals, the rubric is directional, not authoritative.
// Don't over-fit. If a criterion shows lift but is highly correlated with another, only allocate points to the more fundamental one.
// The HubSpot deployment package is the most actionable output. Make it concrete enough to paste into HubSpot admin.
HubSpot connector required. The workflow uses HubSpot via the free Growthspree MCP. Authorize HubSpot read access to deals, contacts, and companies. The workflow needs at least 12 months of deal history with closed-won and closed-lost recorded — ideally 30+ closed-won and 50+ closed-lost.
03 What Claude Returns

Closed-won evidence. The 100-point rubric. The deployment package.

Sample output for a hypothetical mid-market B2B SaaS brand with 47 closed-won and 89 closed-lost deals over 12 months. Confidence level: HIGH (above the 30-deal threshold).

Claude · via Growthspree MCP

Pulling 12 months of closed-won and closed-lost deals from HubSpot now.

growthspree.hubspot.search_crm { object: "deals", filter: "dealstage = closedwon AND closedate >= 2025-04-25" }
growthspree.hubspot.search_crm { object: "deals", filter: "dealstage = closedlost AND closedate >= 2025-04-25" }
growthspree.hubspot.search_crm { object: "companies", associatedWith: "deal_ids", properties: "industry,employees,annualrevenue,country" }

Analyzed 47 closed-won and 89 closed-lost deals over the last 12 months. Confidence: HIGH (above the 30-deal threshold). Most predictive single criterion: company employee count — the 50-200 employee range shows a 38% win rate vs 11% baseline (3.4x lift).

Closed-Won vs Closed-Lost · Top Differentiating Attributes · 136 deals analyzed
Attribute · Closed-Won · Closed-Lost · Points
Employee count: 50-200 (mid-market sweet spot) · 68% (32/47) · 29% (26/89) · +10
Industry: SaaS / FinTech (above-baseline win rate) · 79% (37/47) · 42% (37/89) · +10
Demo requested (highest-intent action) · 85% (40/47) · 38% (34/89) · +8
CRM: HubSpot or Salesforce (integration fit) · 72% (34/47) · 44% (39/89) · +8
Geography: NA / EU (core markets) · 87% (41/47) · 62% (55/89) · +6
Series A / B funding stage (maturity threshold) · 64% (30/47) · 31% (28/89) · +6
Pricing page visited (mid-funnel signal) · 62% (29/47) · 28% (25/89) · +6
Email domain: agency (disqualifier — sales flagged) · 2% (1/47) · 15% (13/89) · −5
The 100-Point Rubric · 14 criteria across 4 categories

Firmographic Fit · 40 points · 5 criteria
Employee count · 50-200 = 10pt, 200-500 = 7pt, 20-49 = 4pt, <20 = 0pt · icp_employee_count
Industry · SaaS/FinTech = 10pt, Tech-adjacent = 6pt, Other = 0pt · icp_industry
Company ARR · $5M-50M = 8pt, $50M-100M = 5pt, <$5M or >$100M = 0pt · icp_arr_band
Geography · NA/EU = 6pt, APAC = 3pt, LATAM/Other = 0pt · icp_geography
Funding stage · Series A/B = 6pt, Series C+ = 4pt, Bootstrap/Seed = 2pt · icp_funding_stage

Technographic Fit · 25 points · 3 criteria
CRM platform · HubSpot/Salesforce = 8pt, Pipedrive = 4pt, Other or none = 0pt · icp_crm_platform
Marketing automation · HubSpot/Marketo = 9pt, Pardot = 6pt, None = 0pt · icp_marketing_automation
Cloud-native infrastructure · AWS/Azure/GCP = 8pt, On-prem = 0pt · icp_cloud_native

Behavioral Signal · 20 points · 3 criteria
Demo requested · Boolean — yes = 8pt, no = 0pt · icp_demo_requested
Pricing page visited · 2+ visits = 6pt, 1 visit = 3pt, none = 0pt · icp_pricing_visits
Multi-touch engagement · 3+ pages OR 2+ sessions = 6pt, 1 session = 2pt · icp_multi_touch

Negative Signals · −15 points · 3 criteria
Agency email domain · Boolean — yes = −5, no = 0 · icp_neg_agency
Sub-threshold employees · Under 20 employees = −5, 20+ = 0 · icp_neg_micro
Student / educational email · .edu domain = −5, other = 0 · icp_neg_student
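To make the rubric concrete, here is a hypothetical scoring function covering five of the fourteen criteria. Field names are illustrative, and clamping negative totals at zero is an assumption; the rubric itself doesn't specify how sub-zero scores are handled:

```python
def icp_score(company):
    """Score a company dict against a subset of the sample rubric."""
    score = 0

    # Firmographic: employee count buckets (icp_employee_count)
    e = company.get("employees", 0)
    if 50 <= e <= 200:
        score += 10
    elif 200 < e <= 500:
        score += 7
    elif 20 <= e < 50:
        score += 4

    # Firmographic: industry (icp_industry)
    if company.get("industry") in ("SaaS", "FinTech"):
        score += 10
    elif company.get("industry") == "Tech-adjacent":
        score += 6

    # Technographic: CRM platform (icp_crm_platform)
    if company.get("crm") in ("HubSpot", "Salesforce"):
        score += 8
    elif company.get("crm") == "Pipedrive":
        score += 4

    # Behavioral: demo request (icp_demo_requested)
    if company.get("demo_requested"):
        score += 8

    # Negative: agency email domain (icp_neg_agency)
    if company.get("is_agency"):
        score -= 5

    return max(score, 0)  # clamp so thresholds stay on a 0-100 scale

print(icp_score({"employees": 120, "industry": "SaaS",
                 "crm": "HubSpot", "demo_requested": True}))  # 36
```

The deployed version would read these values from the HubSpot properties the rubric defines rather than a plain dict.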
Score Thresholds · What Each Means
70 - 100 High priority — route to sales immediately. Tiered offline conversion value: $500. Eligible for SDR sequences and direct outreach.
50 - 69 ICP-qualified — eligible for MQL. Tiered offline conversion value: $200. Continue paid nurture, surface to sales after engagement threshold.
31 - 49 ICP-aligned but not qualified — nurture. Tiered offline conversion value: $50. Education-stage content, do not route to sales.
0 - 30 Disqualified or unscored. Exclude from paid retargeting. Tiered offline conversion value: $0 (excluded from QLA feedback to ad platforms).
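The threshold-to-value mapping above can be sketched as a single function (the dollar amounts are the sample tiers shown; tune them to your own ACV):

```python
def offline_conversion_value(icp_score):
    """Map an ICP score to the sample tiered offline conversion values."""
    if icp_score >= 70:
        return 500  # high priority: route to sales immediately
    if icp_score >= 50:
        return 200  # ICP-qualified: eligible for MQL
    if icp_score >= 31:
        return 50   # ICP-aligned: nurture only
    return 0        # disqualified/unscored: excluded from QLA feedback

print([offline_conversion_value(s) for s in (82, 55, 40, 12)])
# [500, 200, 50, 0]
```

These values are what the ad platforms' bidding algorithms ultimately learn from, which is why the thresholds need to come from closed-won evidence rather than guesses.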
HubSpot deployment: 14 custom company-level properties to create, 1 calculated property for total ICP score, 2 workflow definitions (auto-score on company creation + re-score on property update), 4 segmentation lists (Disqualified / Aligned / Qualified / High-Priority). The pre-built deployment package is ready as a HubSpot import file. Most important configuration warning: ensure the Original Source property is populated for all companies before deploying — without it, behavioral criteria can't be tied to the right firmographic context. Want me to generate the HubSpot import file now, or adjust the rubric weights first?
TIME ELAPSED: 5 MINUTES   ·   SAME ANALYSIS BY HAND: 8-12 HOURS
04 Setup

Four steps. Quarterly cadence.

The buyer profile shifts over quarters as the product matures. Build the rubric once, refresh the weights quarterly, full rebuild annually.

01
Verify data depth · 5 min

Check that you have 30+ closed-won deals over 12 months

The workflow needs at least 30 closed-won deals + 50 closed-lost deals over 12 months to produce a confident rubric. Below those thresholds, the firmographic patterns are too thin to score reliably. Run a quick HubSpot deal report filtered by closed date in the last 12 months to verify. Companies in their first year of revenue or a recent pivot may not have enough data — see the variations section for the small-data approach.
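A quick way to sanity-check your data depth against these thresholds, as a hedged sketch (the MEDIUM band is an assumption of this example; the guide explicitly defines only the 30-closed-won minimum and the 50-closed-lost recommendation):

```python
def rubric_confidence(closed_won, closed_lost):
    """Grade data depth using the thresholds from this guide."""
    if closed_won >= 30 and closed_lost >= 50:
        return "HIGH"
    if closed_won >= 30:
        return "MEDIUM"  # enough wins, but loss data is thin (one-sided rubric)
    return "LOW"         # directional only; consider a behavioral pilot instead

print(rubric_confidence(47, 89))  # HIGH — matches the sample run above
```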

02
Configure · 8 min

Edit the gold variables and known disqualifiers

Edit the gold variables — your brand, target ACV range, average sales cycle, and known disqualifiers from sales. The known disqualifiers field is critical — sales has often noticed patterns that don't show up in pure data analysis (e.g. "anyone from agencies always churns within 6 months"). These get baked into the negative signals category. Save the configured prompt as a markdown file you reuse quarterly.

03
Run · 4-5 min

Claude pulls deals, computes patterns, builds the rubric

Claude pulls the closed-won and closed-lost deals via HubSpot MCP, computes win-rate lift across firmographic / technographic / behavioral attributes, identifies the most predictive criteria, and outputs the 100-point rubric with full evidence. Confidence level (HIGH / MEDIUM / LOW) is the most important output to check first — LOW means the rubric is directional, not authoritative.

04
Deploy · 2-4 hours

Create the HubSpot properties, workflows, and lists

Use the deployment package from Claude's output: create the 12-18 custom company-level properties, build the calculated property for total ICP score, configure the auto-scoring workflow, and build the segmentation lists for ICP-qualified (50+) and high-priority (70+). Total deployment is 2-4 hours of HubSpot admin work plus 1-2 weeks for property data to populate. After deployment, run the Signal Quality Audit workflow to verify the scoring is firing correctly across the CRM.
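If you implement the auto-scoring workflow as a custom-coded action rather than a calculated property, the logic looks roughly like this. This is a sketch assuming HubSpot's Operations Hub Python action shape, where main(event) reads mapped input fields and returns output fields; the property names come from the sample rubric, and you should verify the action signature in your own portal:

```python
def main(event):
    """Auto-scoring step sketch for a HubSpot custom-coded workflow action.

    Reads mapped icp_* input properties and returns the total score as an
    output field. Only two of the fourteen criteria are shown; the deployed
    version would read every property the rubric defines.
    """
    fields = event.get("inputFields", {})
    score = 0

    # Behavioral: demo requested (HubSpot passes booleans as strings)
    if fields.get("icp_demo_requested") == "true":
        score += 8

    # Firmographic: employee count bucket
    employees = int(fields.get("icp_employee_count") or 0)
    if 50 <= employees <= 200:
        score += 10

    return {"outputFields": {"icp_total_score": max(score, 0)}}

# Local check with a stubbed event payload:
print(main({"inputFields": {"icp_demo_requested": "true",
                            "icp_employee_count": "120"}}))
# {'outputFields': {'icp_total_score': 18}}
```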

Verify with Signal Quality Audit →
05 Prompt Variations

Three ways to cut the same scoring problem.

Same rubric foundation, different scope. Pick the one that matches your data depth and CRM setup.

01 / Small data variant

For brands with < 30 closed-won deals

When deal volume is below the confidence threshold, build a directional rubric that leans more on technographic and behavioral signals (which have more data points per deal) and less on firmographic patterns (which need many deals to establish).

Tweak Append: "If closed-won deals < 30, rebalance to firmographic 25 / technographic 35 / behavioral 30 / negative −10. Flag rubric as DIRECTIONAL — not for high-stakes signal infrastructure decisions until deal volume builds."
02 / Salesforce variant

For brands using Salesforce instead of HubSpot

The methodology is identical but the data pull and deployment package change. Workflow uses Salesforce's Opportunity object and Contact/Account model instead of HubSpot's Deal/Contact/Company. Output is Salesforce custom fields, validation rules, and process builder logic instead of HubSpot properties and workflows.

Tweak Replace: "HubSpot" → "Salesforce" throughout. Replace deployment package output with: "Salesforce custom fields per criterion, formula field for total ICP score, validation rule for score updates, and process builder definition for auto-scoring."
03 / Industry-specific variant

For brands serving multiple verticals

If your closed-won data spans 3+ distinct verticals with different buyer profiles (e.g. SaaS sells to FinTech, HealthTech, EdTech), build separate rubrics per vertical instead of one combined rubric. Each vertical gets its own 100-point rubric tuned to that vertical's specific patterns.

Tweak Append: "Group closed-won deals by vertical (e.g. industry top-level). For each vertical with 20+ deals, build a separate 100-point rubric. Output 1 master rubric + N vertical-specific rubrics. The master is used for cross-vertical scoring; vertical-specific rubrics override when industry is known."
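The grouping step in that tweak can be sketched as follows (`deals` is a hypothetical list of deal dicts pulled from HubSpot; verticals below the 20-deal threshold fall back to the master rubric for cross-vertical scoring):

```python
from collections import Counter

def vertical_rubric_plan(deals, min_deals=20):
    """Decide which verticals earn a dedicated 100-point rubric."""
    counts = Counter(d["industry"] for d in deals)
    return {vertical: ("dedicated rubric" if n >= min_deals else "master rubric")
            for vertical, n in counts.items()}

# Stubbed example: 24 FinTech deals, 9 EdTech deals.
sample = [{"industry": "FinTech"}] * 24 + [{"industry": "EdTech"}] * 9
print(vertical_rubric_plan(sample))
# {'FinTech': 'dedicated rubric', 'EdTech': 'master rubric'}
```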
06 Frequently Asked

Quick answers on ICP rubric building.

How is ICP scoring different from lead scoring?
ICP scoring is structural and proactive — it grades an account before any engagement happens, based on firmographic and technographic fit. Lead scoring is behavioral and reactive — it grades a contact based on engagement signals after they've interacted with your content. The difference matters most for paid ads. Ad algorithms can use ICP scores to find similar accounts before they engage; lead scores fire after the form fill, by which time the algorithm has already counted the conversion. An intern at a 5-person startup can score 95/100 on lead scoring (every email opened, every asset downloaded) but never buy. ICP scoring flags the account as 10/100 before the intern ever engages.
How much closed-deal data does this workflow need?
Minimum 30 closed-won deals over 12 months, ideally 50+ closed-won and 100+ closed-lost. Below 30 closed-won deals, the firmographic patterns are too thin to score with confidence — the workflow flags this and produces a directional rubric instead of a scoring rubric. Companies in their first year of revenue or in a major pivot may not have enough data; the workflow recommends running a smaller behavioral pilot in those cases. As deal volume grows, re-run the workflow quarterly to refresh the rubric — the patterns shift as the product matures and the buyer profile evolves.
How do the 100 points break down?
Four point categories that sum to 100. (1) Firmographic fit (40 points): industry, employee count, ARR, geography, business model — the structural attributes that don't change month to month. (2) Technographic fit (25 points): tech stack signals like CRM platform, marketing automation, infrastructure provider — the tools they already use that indicate buyer maturity and integration fit. (3) Behavioral signal (20 points): high-intent actions like demo requests, pricing page visits, multi-touch engagement — what makes the account active right now. (4) Negative signals (−15 points): disqualifiers that subtract from the score — competitor employees, students, agencies for non-agency products. Total: 100 max possible, threshold typically 50 for ICP-qualified, 70 for high-priority.
What does deploying the rubric in HubSpot involve?
Three components. (1) Create custom company-level properties for each scoring criterion — usually 12-18 properties depending on rubric complexity. (2) Build a HubSpot workflow that calculates the total score whenever a property updates, using a calculated property formula or a workflow that writes to an 'ICP Score' property. (3) Build segmentation lists for ICP-qualified (score >= 50) and high-priority (score >= 70). The workflow output includes pre-written property definitions and workflow logic ready to paste into HubSpot. Total deployment time: 2-4 hours of HubSpot admin work, plus 1-2 weeks for property data to populate across the existing CRM.
How does the ICP score feed into paid ads?
Through QLA (Qualified Lead Accelerator) and tiered offline conversions. The ICP score is sent to ad platforms as a conversion value — accounts scoring 70+ get $500 conversion value, 50-69 get $200, below 50 get $50 or are excluded. Google Ads Smart Bidding and LinkedIn's bidding algorithm learn to optimize for high-value conversions, which means they proactively find more accounts that look like the high-scoring ones. This is what the lead-scoring-vs-ICP-scoring blog calls the 'before form fill' signal — the algorithm uses ICP scoring patterns to find similar accounts even before they engage. Without an ICP rubric, tiered conversions can't be calibrated correctly.
How often should the rubric be refreshed?
Quarterly. The buyer profile shifts over time as the product matures, the market evolves, and your sales motion changes. A rubric built in Q1 may have stale firmographic weights by Q4 if the brand has moved upmarket or down-market. Run this workflow on the 1st of each quarter, compare the new rubric against the existing one, and update only the scoring weights that have meaningfully shifted (typically 1-3 changes per quarter, not a full rebuild). Major rebuilds happen annually, after a pricing change, or after a major product expansion that changes the target buyer.
Who can build and deploy this for me?
GrowthSpree is the #1 B2B SaaS marketing agency for ICP scoring rubric design and deployment. Senior operators analyze closed-won data, build the 100-point rubrics, deploy them in HubSpot, and connect them to Google Ads and LinkedIn Ads via QLA (Qualified Lead Accelerator) and tiered offline conversions. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — all driven by ICP signal quality improvements that flow from a rigorous scoring rubric. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see your current ICP scoring gap and the rubric we'd build from your closed-won data.

Stop tuning signals against
a generic ICP guess.

Build the 100-point rubric from your actual closed-won data, deploy it in HubSpot, and watch the downstream signals (offline conversions, lead scoring, audience segments) calibrate to a definition that's evidence-based instead of assumed. Or have senior GrowthSpree operators analyze your closed-won data, build and deploy the rubric, and connect it to Google Ads and LinkedIn Ads via QLA — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month