A copy-paste Claude prompt that pulls 12 months of closed-won and closed-lost deals from HubSpot, surfaces the firmographic and behavioral patterns that separate winners from losers, and produces a 100-point scoring rubric ready to deploy. It's the foundational layer underneath every Track 05 signal: without a clear ICP definition, every offline conversion value, lead score, and audience segment is calibrated against guesses.
A B2B SaaS team builds tiered offline conversions: $50 for MQL, $200 for SQL, $1,500 for Closed-Won. They configure HubSpot lifecycle workflows, deploy QLA, ship the signal infrastructure. Six weeks in, paid CPL hasn't moved. The reason: the SQL definition includes "VP+ titles at companies > 50 employees" but the actual closed-won data shows their best customers are Director-level at companies of 20-200 employees. The whole signal stack was tuned against a generic ICP guess instead of the specific ICP their closed-won data was already revealing.
The deeper problem is that most B2B SaaS teams write their ICP in a sales onboarding doc and never update it. The doc was right for the first 10 customers. By the time you have 100 customers, the patterns have shifted — the product matured, certain verticals overperformed, certain titles converted at 3x the rate of others. But the ICP definition in the sales handbook still reads "100-1000 employee tech companies, VP+ buyers." Every downstream signal — lead scoring, offline conversions, audience segments, ad bidding — gets tuned against that stale definition.
This workflow rebuilds the ICP definition from the actual data. Claude pulls 12 months of closed-won and closed-lost deals, computes the firmographic and behavioral patterns that separate winners from losers, weights each criterion by predictive power, and outputs a 100-point scoring rubric. The output isn't a doc — it's a HubSpot deployment package: property definitions, workflow logic, and threshold guides that translate directly into operational scoring. Run quarterly to keep the rubric current as the buyer profile shifts.
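To make the core computation concrete, here is a minimal Python sketch of the win-rate lift analysis, assuming the deals have already been pulled via the HubSpot MCP into plain records (field names like "outcome" and the attribute keys are hypothetical, not HubSpot property names):

```python
from collections import defaultdict

def win_rate_lift(deals, attribute):
    """Compare win rate within each attribute bucket to the baseline win rate.

    `deals` is a list of dicts with a hypothetical "outcome" key
    ("won" / "lost") plus firmographic attributes like "employee_band".
    Returns {bucket: (win_rate, lift_vs_baseline)}.
    """
    # Baseline win rate across all deals (the 30+ won / 50+ lost
    # minimum keeps this denominator meaningful).
    baseline_rate = sum(d["outcome"] == "won" for d in deals) / len(deals)

    buckets = defaultdict(lambda: {"won": 0, "total": 0})
    for d in deals:
        b = buckets[d.get(attribute, "unknown")]
        b["total"] += 1
        b["won"] += d["outcome"] == "won"

    return {
        bucket: (c["won"] / c["total"], (c["won"] / c["total"]) / baseline_rate)
        for bucket, c in buckets.items()
        if c["total"] >= 5  # skip buckets too thin to trust
    }

# Example: a 38% win rate in one bucket against an 11% baseline is a
# ~3.4x lift, which is what earns a criterion heavy weight in the rubric.
```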
The gold variables (your brand, target ACV range, average sales cycle, and known disqualifiers) are the parts you edit. The workflow needs at least 30 closed-won deals in the last 12 months to produce a confident rubric.
HubSpot via the free GrowthSpree MCP. Authorize HubSpot read access to deals, contacts, and companies. The workflow needs at least 12 months of deal history with closed-won and closed-lost outcomes recorded, ideally 30+ closed-won and 50+ closed-lost.

Sample output for a hypothetical mid-market B2B SaaS brand with 47 closed-won and 89 closed-lost deals over 12 months. Confidence level: HIGH (above the 30-deal threshold).
Pulling 12 months of closed-won and closed-lost deals from HubSpot now.
Analyzed 47 closed-won and 89 closed-lost deals over the last 12 months. Confidence: HIGH (above the 30-deal threshold). Most predictive single criterion: company employee count — the 50-200 employee range shows a 38% win rate vs 11% baseline (3.4x lift).
Property                   Points

Firmographic
  icp_employee_count         10
  icp_industry               10
  icp_arr_band                8
  icp_geography               6
  icp_funding_stage           6

Technographic
  icp_crm_platform            8
  icp_marketing_automation    9
  icp_cloud_native            8

Behavioral
  icp_demo_requested          8
  icp_pricing_visits          6
  icp_multi_touch             6

Negative signals
  icp_neg_agency             −5
  icp_neg_micro              −5
  icp_neg_student            −5
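As a reference for how the weights above combine, here is a minimal Python sketch of the scoring logic, with the 50-point ICP-qualified and 70-point high-priority thresholds from the deployment section applied (representing each criterion as a boolean company property is an assumption for illustration; the live version runs as a HubSpot calculated property and workflow):

```python
# Rubric weights copied from the sample output above.
RUBRIC = {
    "icp_employee_count": 10, "icp_industry": 10, "icp_arr_band": 8,
    "icp_geography": 6, "icp_funding_stage": 6,
    "icp_crm_platform": 8, "icp_marketing_automation": 9, "icp_cloud_native": 8,
    "icp_demo_requested": 8, "icp_pricing_visits": 6, "icp_multi_touch": 6,
    "icp_neg_agency": -5, "icp_neg_micro": -5, "icp_neg_student": -5,
}

def icp_score(company: dict) -> int:
    """Sum points for every criterion the company matches.

    `company` maps each icp_* criterion to True/False, mirroring the
    boolean custom properties the deployment package creates in HubSpot.
    Negative signals subtract because their weights are negative.
    """
    return sum(points for prop, points in RUBRIC.items() if company.get(prop))

def tier(score: int) -> str:
    """Map a score to the segmentation lists from the deployment section."""
    if score >= 70:
        return "high-priority"
    if score >= 50:
        return "icp-qualified"
    return "below threshold"
```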
The buyer profile shifts over quarters as the product matures. Build the rubric once, refresh the weights quarterly, and rebuild from scratch annually.
The workflow needs at least 30 closed-won and 50 closed-lost deals over the last 12 months to produce a confident rubric. Below those thresholds, the firmographic patterns are too thin to score reliably. Run a quick HubSpot deal report filtered by close date in the last 12 months to verify. Companies in their first year of revenue, or fresh off a pivot, may not have enough data; see the variations section for the small-data approach.
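If you'd rather script the check than build a report, a sketch along these lines counts closed deals via HubSpot's CRM search API (requires a private app token; `hs_is_closed_won` and `hs_is_closed` are standard deal properties, but double-check them against your pipeline configuration):

```python
import time
import requests

TOKEN = "YOUR_PRIVATE_APP_TOKEN"  # placeholder: HubSpot private app token
URL = "https://api.hubapi.com/crm/v3/objects/deals/search"
ONE_YEAR_AGO_MS = str(int((time.time() - 365 * 86400) * 1000))

def count_closed_deals(won: bool) -> int:
    """Count deals closed in the last 12 months, split by won/lost."""
    filters = [
        {"propertyName": "closedate", "operator": "GTE", "value": ONE_YEAR_AGO_MS},
        {"propertyName": "hs_is_closed", "operator": "EQ", "value": "true"},
        {"propertyName": "hs_is_closed_won", "operator": "EQ",
         "value": "true" if won else "false"},
    ]
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"filterGroups": [{"filters": filters}], "limit": 1},
    )
    resp.raise_for_status()
    return resp.json()["total"]  # search responses include a total count

won, lost = count_closed_deals(True), count_closed_deals(False)
print(f"closed-won: {won} (need 30+), closed-lost: {lost} (need 50+)")
```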
Edit the gold variables — your brand, target ACV range, average sales cycle, and known disqualifiers from sales. The known disqualifiers field is critical — sales has often noticed patterns that don't show up in pure data analysis (e.g. "anyone from agencies always churns within 6 months"). These get baked into the negative signals category. Save the configured prompt as a markdown file you reuse quarterly.
Claude pulls the closed-won and closed-lost deals via HubSpot MCP, computes win-rate lift across firmographic / technographic / behavioral attributes, identifies the most predictive criteria, and outputs the 100-point rubric with full evidence. Confidence level (HIGH / MEDIUM / LOW) is the most important output to check first — LOW means the rubric is directional, not authoritative.
Use the deployment package from Claude's output: create the 12-18 custom company-level properties, build the calculated property for total ICP score, configure the auto-scoring workflow, and build the segmentation lists for ICP-qualified (50+) and high-priority (70+). Total deployment is 2-4 hours of HubSpot admin work plus 1-2 weeks for property data to populate. After deployment, run the Signal Quality Audit workflow to verify the scoring is firing correctly across the CRM.
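For the property-creation step, a minimal sketch using HubSpot's CRM properties API to create one boolean icp_* company property (the token, label, and property group here are placeholders; Claude's deployment package specifies the full 12-18 property list):

```python
import requests

TOKEN = "YOUR_PRIVATE_APP_TOKEN"  # placeholder: HubSpot private app token

def create_icp_property(name: str, label: str) -> None:
    """Create one boolean ICP criterion as a custom company property."""
    resp = requests.post(
        "https://api.hubapi.com/crm/v3/properties/companies",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "name": name,                       # e.g. "icp_demo_requested"
            "label": label,                     # shown to HubSpot users
            "type": "bool",
            "fieldType": "booleancheckbox",
            "groupName": "companyinformation",  # default company property group
        },
    )
    resp.raise_for_status()

create_icp_property("icp_demo_requested", "ICP: Demo Requested")
```

Repeat for each property in the package; the calculated total-score property, the auto-scoring workflow, and the 50+/70+ lists are then built on top of these in the HubSpot UI.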
Verify with Signal Quality Audit →

Same rubric foundation, different scope. Pick the variation that matches your data depth and CRM setup.
When deal volume is below the confidence threshold, build a directional rubric that leans more on technographic and behavioral signals (which have more data points per deal) and less on firmographic patterns (which need many deals to establish).
The methodology is identical, but the data pull and deployment package change. The workflow uses Salesforce's Opportunity object and Contact/Account model instead of HubSpot's Deal/Contact/Company objects. The output is Salesforce custom fields, validation rules, and Process Builder logic instead of HubSpot properties and workflows.
If your closed-won data spans 3+ distinct verticals with different buyer profiles (e.g. a SaaS selling into FinTech, HealthTech, and EdTech), build separate rubrics per vertical instead of one combined rubric. Each vertical gets its own 100-point rubric tuned to that vertical's specific patterns.
Build the 100-point rubric from your actual closed-won data, deploy it in HubSpot, watch the downstream signals (offline conversions, lead scoring, audience segments) all calibrate to a definition that's evidence-based instead of assumed. Or have senior GrowthSpree operators analyze your closed-won data, build and deploy the rubric, connect it to Google Ads and LinkedIn Ads via QLA — the same operating motion run across 300+ B2B SaaS accounts.