Workflow · Daily Cadence · ~12 min run · Google + LinkedIn + Meta + HubSpot

Catch the issue on day 2.
Not week 4.

A daily Claude prompt that scans Google Ads, LinkedIn Ads, Meta, and HubSpot pipeline in parallel, surfaces yesterday's anomalies (CPA spikes, conversion drops, audience saturation, MQL volume crashes), and deduplicates root causes when one issue manifests across multiple platforms. Output is the 1-3 things that need attention today — not a 12-row alert list.

4 platforms
Scanned in parallel
5 metrics
CPA · CVR · Spend · Frequency · MQL
N→M dedup
Anomalies → unique root causes
12 min
Daily cadence
01 The Problem in 60 Seconds

The monthly review catches it.
3 weeks too late.

A B2B SaaS team's GTM tracking pixel breaks on a Tuesday. By Wednesday, Google Ads conversion rate is down 28%, LinkedIn shows a 31% drop, Meta shows a 24% drop, and HubSpot MQLs are flat (because demos aren't being recorded). Viewed alone, each platform's dashboard shows one isolated alert. Nobody panics. By Friday, $4,200 in ad spend has produced almost no recorded conversions, and Smart Bidding has started ramping spend further to "find more conversions." By the following Monday the ratio is bad enough that someone notices, but the bidding algorithm has now learned from a week of broken data. Even after the pixel is fixed, recovery takes 2-3 weeks. The original issue was a single pixel.

The deeper problem is that cross-platform anomalies are invisible from any single dashboard. A Google Ads CPA spike of 28% looks like noise. A LinkedIn conversion drop of 31% looks like seasonality. A HubSpot MQL flat-line looks like sales pipeline being slow. Looked at separately, each is a yellow signal. Looked at together with the same root cause, they're a five-alarm fire. Most B2B SaaS teams catch these issues 2-3 weeks late because nobody is running the cross-platform scan.

This workflow runs the cross-platform scan in 12 minutes. Claude pulls yesterday's metrics from all 4 platforms in parallel, compares to 30-day baselines, surfaces every anomaly above threshold, then runs a deduplication pass that groups platform-specific anomalies into actual root causes. Output is the 1-3 highest-priority issues that need attention today, each with a root-cause hypothesis and recommended response. Run daily before standup. The cost of catching one tracking pixel breakage on day 2 instead of day 14 pays for the whole year of daily scans.

Track 04 Cadence Triangle · Daily / Weekly / Monthly
This workflow is the daily layer
Daily · 12 min
Anomaly detection
(this workflow)
Cross-platform scan for yesterday's anomalies. Catches fast-moving issues — broken tracking, sudden spend overruns, audience saturation.
Weekly · 30-60 min
Triggered deep-dives
Run individual workflows when daily anomalies indicate systemic issues — Google Ads waste audit, LinkedIn Ads waste audit, etc.
Monthly · 30 min
Pipeline health check
Cross-platform 5-row scorecard. Catches slow-moving issues — channel mix drift, funnel velocity decay, signal quality erosion.
02 The Prompt

Copy this prompt into
Claude Desktop.

The bracketed variables — your brand, anomaly thresholds, and which platforms are active — are the parts you edit. Run it first thing each morning, before standup.

claude_desktop — daily_anomaly_scan.md
Role
You are running the daily cross-platform performance anomaly scan for my B2B SaaS company. Pull yesterday's metrics from all platforms in parallel, compare to 30-day baselines, surface every anomaly above threshold, then run a deduplication pass that groups platform-specific anomalies into actual root causes. Output the 1-3 highest-priority issues that need attention today.

My Brand
Brand: [your B2B SaaS brand name]
Active platforms: [which to scan — e.g. "Google Ads, LinkedIn Ads, HubSpot. No Meta."]
Total monthly spend across platforms: [helps calibrate threshold sensitivity]

Anomaly Thresholds
CPA spike: [default 30% above 30-day average]
Conversion rate drop: [default 25% below 30-day average]
Spend variance: [default 50% from daily target]
Audience saturation: [default frequency > 12/wk for B2B SaaS]
MQL volume drop: [default 25% below trailing 7-day average]
// Tighten thresholds (15-20%) for accounts with consistent daily volume.
// Loosen thresholds (40-50%) for thin-volume accounts to avoid alert fatigue.

Task
1. Pull yesterday's metrics for each active platform:
   - Google Ads: spend, conversions, CPA, conversion rate, impression share by campaign type (Brand/Non-Brand/Competitor/Retargeting/PMax)
   - LinkedIn Ads: spend, leads, CPL, conversion rate, frequency by campaign
   - Meta Ads (if active): spend, leads, CPL, frequency by ad set
   - HubSpot: yesterday's MQL count, demo count, opportunity creation count, by source
2. For each metric, compute the 30-day rolling average and standard deviation.
3. Flag any yesterday-vs-baseline deviation above the threshold.
4. Anomaly battery (5 metrics × 4 platforms = up to 20 cells): for each platform, classify each metric as:
   - HEALTHY: within threshold of 30-day average
   - WARN: deviation 50-100% of threshold (worth noting, not yet actionable)
   - ALERT: deviation above threshold (actionable today)
   - N/A: metric doesn't apply to platform (e.g. frequency on HubSpot)
5. Root-cause deduplication: for all ALERT cells, identify whether multiple alerts likely share a root cause:
   - "Conversion rate drop on all 3 ad platforms + flat HubSpot MQLs" → single root cause: tracking pixel breakage or HubSpot form issue
   - "CPA spike on Google Ads + CPL spike on LinkedIn Ads" with no conversion-rate drop → likely two separate auction-pressure or competitive issues
   - "Spend overrun on Google Ads only" → isolated to that platform
   Output the deduplicated list — N original alerts collapse into M actual root causes (where M is usually 1-4).
6. Per root cause, classify response urgency:
   - URGENT (investigate now): pixel/tracking issues, conversion-rate crashes that may indicate site outage, MQL flat-line of 2+ days
   - ADJUST (within 24 hrs): CPA spikes, audience saturation, frequency caps blown, spend overruns
   - MONITOR (2-3 days): minor metric drift, single-day fluctuations that may be noise
   For each, provide a root-cause hypothesis and recommended action.

Output format
1. Headline: total platforms scanned, total cells in anomaly battery, count of HEALTHY / WARN / ALERT / N/A, single most urgent finding.
2. Cross-platform anomaly grid (5 metrics × 4 platforms): one cell per metric per platform showing HEALTHY / WARN / ALERT / N/A status with the specific deviation percentage.
3. Root-cause deduplication: N alerts → M unique root causes. For each root cause, list the contributing alerts and provide a single-sentence root-cause hypothesis.
4. Prioritized action list (1-3 items): for each, the urgency category (URGENT / ADJUST / MONITOR), root cause, recommended action, and estimated time to address.
5. Honest calibration:
   - If anomaly count > 5 root causes after dedup, flag that thresholds may be too tight — recommend loosening.
   - If anomaly count = 0 for > 7 days running, flag that thresholds may be too loose — recommend tightening.
   - If a single platform is consistently the source of 80%+ of alerts, flag for deeper diagnostic — that platform may have a systemic issue, not just daily noise.

// Be specific about deviation percentages. "30% above baseline" is more actionable than "spike."
// Be honest about noise vs signal. Single-day deviations on thin-volume accounts are often noise — flag as MONITOR, not URGENT.
// The deduplication step is the most important output. Without it, the operator chases 12 anomalies when there are only 3 root causes.
// Recommend escalation to engineering only when a tracking-related root cause is confirmed — not on first signal.
Multi-platform connectors required. The workflow uses google_ads, linkedin_ads, and hubspot via the free Growthspree MCP. Add Meta if you run Facebook/Instagram for B2B SaaS retargeting. The deduplication step is what makes cross-platform faster than single-platform — without it, 4 platforms produce 4× the alert noise.
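The classification and deduplication logic the prompt describes can be made concrete. Below is a minimal Python sketch of Task steps 4-5, assuming the default thresholds above — illustrative pseudo-logic for what Claude is asked to do, not the Growthspree MCP API or Claude's actual implementation:

```python
# Illustrative sketch of Task steps 4-5: threshold classification and
# root-cause deduplication. Metric keys and rules are hypothetical examples
# mirroring the prompt's defaults, not a real Growthspree MCP API.

THRESHOLDS = {              # fraction of baseline, per the Anomaly Thresholds section
    "cpa": 0.30,            # CPA spike above 30-day average
    "conv_rate": 0.25,      # conversion-rate drop below 30-day average
    "spend": 0.50,          # spend variance from daily target
    "mql": 0.25,            # MQL drop below trailing 7-day average
}

def classify(yesterday: float, baseline: float, threshold: float) -> str:
    """Map a yesterday-vs-baseline deviation onto HEALTHY / WARN / ALERT (step 4)."""
    if baseline == 0:
        return "N/A"                          # no baseline to deviate from
    deviation = abs(yesterday - baseline) / baseline
    if deviation >= threshold:
        return "ALERT"                        # actionable today
    if deviation >= 0.5 * threshold:
        return "WARN"                         # 50-100% of threshold: note, don't act
    return "HEALTHY"

def dedup(flagged: list[str]) -> list[str]:
    """Collapse platform-specific flags into root causes (step 5 heuristics).
    Flags are 'platform:metric' strings, e.g. 'google:conv_rate'."""
    remaining, causes = set(flagged), []
    # Rule 1: CVR down on 2+ ad platforms plus a HubSpot MQL drop → one tracking issue.
    cvr = {f for f in remaining if f.endswith(":conv_rate")}
    if len(cvr) >= 2 and "hubspot:mql" in remaining:
        causes.append("suspected tracking pixel / form breakage")
        remaining -= cvr | {"hubspot:mql"}
        # CPA spikes alongside missing conversions are a symptom of the same cause.
        remaining -= {f for f in remaining if f.endswith(":cpa")}
    # Rule 2: frequency + spend flags on the same platform → audience saturation.
    for platform in {f.split(":")[0] for f in remaining}:
        if {f"{platform}:frequency", f"{platform}:spend"} <= remaining:
            causes.append(f"{platform} audience saturation")
            remaining -= {f"{platform}:frequency", f"{platform}:spend"}
    causes.extend(sorted(remaining))          # anything left is its own isolated issue
    return causes

# The sample scan from section 03, reduced to flags:
flags = ["google:cpa", "google:conv_rate", "linkedin:conv_rate",
         "linkedin:spend", "linkedin:frequency", "hubspot:mql"]
print(dedup(flags))
# -> ['suspected tracking pixel / form breakage', 'linkedin audience saturation']
```

Tracing the sample scan in section 03 through this sketch yields the same two ALERT-level root causes; the third issue (LinkedIn CPL at +18%) stays out of the dedup because it is WARN-level, not ALERT.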
03 What Claude Returns

The grid. The dedup. The action list.

Sample output for a hypothetical mid-market B2B SaaS account running Google Ads + LinkedIn Ads + HubSpot ($35K/mo total spend). The scan flags 7 cells, which dedup down to 3 actual root causes — one urgent, one to adjust, one to monitor.

Claude · via Growthspree MCP

Pulling yesterday's metrics from all 3 active platforms in parallel.

growthspree.google_ads.run_gaql { query: "SELECT campaign.name, metrics.cost, metrics.conversions, metrics.cost_per_conversion FROM campaign WHERE segments.date = YESTERDAY" }
growthspree.linkedin_ads.daily_trends { start_date: "yesterday", end_date: "yesterday", metrics: ["spend","leads","cpl","frequency"] }
growthspree.hubspot.search_crm { object: "deals", filter: "createdate = yesterday" }

Scan complete. 15 cells in the anomaly battery (5 metrics × 3 active platforms). Distribution: 8 HEALTHY · 1 WARN · 6 ALERT · 0 N/A. Root-cause dedup collapses the 7 flagged cells into 3 actual issues. Most urgent: tracking pixel firing inconsistently — 4 flagged cells share this root cause.

Cross-Platform Anomaly Grid · Yesterday vs 30-Day Baseline 5 metrics × 3 platforms = 15 cells
Metric ↓ / Platform →
Google Ads
LinkedIn Ads
Meta
HubSpot
CPA / CPL
+34%
+18%
Conv Rate
−28%
−31%
Spend
+62%
Frequency
14.2/wk
MQL Volume
−42%
Root-Cause Deduplication · 7 flagged cells → 3 unique issues
01
Tracking pixel firing inconsistently or HubSpot form issue — accounts for 4 of the 7 flagged cells. The pattern of conversion rate down on BOTH ad platforms (−28%, −31%) combined with HubSpot MQL volume −42% strongly suggests broken conversion tracking. The CPA spike on Google Ads (+34%) is consistent with Smart Bidding ramping spend to "find" missing conversions. Investigate this first.
→ Contributing alerts: Google CVR · LinkedIn CVR · Google CPA · HubSpot MQL volume
Urgent
02
LinkedIn Ads frequency saturation in the T1 ABM audience — accounts for 2 of the 7 flagged cells. Frequency is 14.2/wk vs the 12/wk default threshold; the spend overrun (+62%) is consistent with bidding pushing harder into a saturated audience. Independent of issue #1 — it would persist even with tracking fixed.
→ Contributing alerts: LinkedIn frequency · LinkedIn spend
Adjust 24h
03
LinkedIn CPL minor drift (+18%) — a single flagged cell, below the ALERT threshold (WARN-level: 18% vs the 25% threshold). Likely auction-pressure noise from a competitor's recent campaign launch in the same audience. Watch for 2-3 more days; if the drift continues past 25%, escalate to the LinkedIn Ads waste audit workflow.
→ Contributing alerts: LinkedIn CPL (WARN-level)
Monitor
Today's priority: investigate the tracking pixel issue first. Verify the HubSpot form is firing correctly by submitting a test demo request, then check whether Google Ads and LinkedIn Ads are receiving the conversion events. If the pixel is broken, expect 1-3 hours of engineering time to fix it, plus 24-48 hours for the bidding algorithms to re-stabilize. Issue #2 (LinkedIn frequency saturation) can wait until tomorrow — if frequency hits 16/wk on day 2, escalate to the LinkedIn Ads waste audit. Issue #3 (LinkedIn CPL drift) is monitor-only — no action today. Want me to run the full LinkedIn Ads waste audit workflow now to dig into issue #2 in depth?
TIME ELAPSED: 4 MINUTES · SAME ANALYSIS BY HAND: 30-45 MINUTES ACROSS 3 DASHBOARDS
04 Setup

Four steps. Daily before standup.

Configure once, run as the first thing each morning, escalate to deeper workflows when daily anomalies indicate systemic issues.

01
Authorize platforms · 5 min

Connect Google Ads + LinkedIn Ads + HubSpot at minimum

Install the free Growthspree MCP and authorize all 3 platforms. Add Meta only if you run Facebook/Instagram for B2B SaaS retargeting. The workflow gracefully handles missing platforms — if you only run Google + HubSpot, the grid shrinks to 2 platforms and the dedup logic still works. Do not authorize platforms you don't use — they add noise without signal.

02
Tune thresholds · 2-3 weeks

Run with default thresholds, then tune for your account volatility

Default thresholds (30% CPA spike, 25% CVR drop, 50% spend variance, 12/wk frequency, 25% MQL drop) work for typical mid-market B2B SaaS. After 2-3 weeks of running, check for false-positive patterns: if you're getting 4+ alerts/day consistently with no real issues, loosen the thresholds. If you're getting 0 alerts/day for 7+ days running, tighten them. The goal is 1-3 actual root causes per scan — fewer means thresholds are too loose, more means they're too tight.
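The tuning rule of thumb translates to a simple check — a hypothetical sketch, assuming you keep a per-day log of post-dedup root-cause counts (the workflow doesn't produce this log automatically):

```python
def recommend_tuning(daily_root_causes: list[int]) -> str:
    """Threshold-tuning heuristic from this step: target 1-3 root causes per scan.
    `daily_root_causes` is a hypothetical per-day log of post-dedup counts."""
    last_week = daily_root_causes[-7:]
    if all(count == 0 for count in last_week):
        return "tighten (e.g. 30% -> 20%): a silent week likely means under-alerting"
    if sum(count >= 4 for count in last_week) >= 5:
        return "loosen (e.g. 30% -> 40-50%): 4+ root causes most days is alert fatigue"
    return "keep current thresholds"

print(recommend_tuning([0, 0, 0, 0, 0, 0, 0]))  # -> tighten (e.g. 30% -> 20%): ...
```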

03
Run · 4 min

Run as the first thing each morning before standup

The 4-minute scan runs in parallel across platforms (the data pull itself is fast; the analysis and dedup take most of the time). Optimal timing is 8:30-9:00 AM local — late enough that yesterday's data has finalized in all platforms, early enough to act on URGENT findings before standup. Keep the prompt as a saved markdown file you re-run unchanged each day.

04
Act + escalate · 5-30 min

Address URGENT findings today, escalate to deep-dive workflows when needed

URGENT findings need action today. ADJUST findings can wait 24 hours. MONITOR findings need 2-3 days of tracking before action. When daily anomalies indicate systemic issues, escalate to the relevant Track 04 deep-dive workflow: persistent CPA spikes → Google Ads waste audit. Repeated frequency saturation → LinkedIn Ads waste audit. Tracking issues that recur → Track 05's signal quality audit. The daily workflow is a router; the deep-dive workflows are where the actual fixes happen.

Track 04 workflow library →
05 Prompt Variations

Three ways to cut the same anomaly battery.

Same anomaly framework, different scope. Pick the one that matches your account volatility and team size.

01 / Weekly variant

For accounts spending < $30K/month

Daily volume on smaller accounts is often too thin for daily anomaly detection — single-day fluctuations dominate the signal. Switch to weekly cadence: scan every Monday morning covering the prior week's data. Same anomaly battery but with thresholds calibrated against weekly (not daily) baselines.

Tweak: Change "yesterday's metrics" → "last 7 days' metrics" throughout. Compare against a trailing 12-week baseline (not 30-day). Loosen thresholds to 40-50% (weekly variance is naturally larger than daily).
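In code terms, the baseline swap looks like this — a minimal sketch assuming weekly totals are available as a plain list (hypothetical data shape):

```python
def weekly_deviation(weekly_totals: list[float]) -> float:
    """Weekly variant: compare last week against the mean of the 12 prior weeks,
    instead of yesterday against a 30-day daily mean."""
    *baseline_weeks, last_week = weekly_totals[-13:]   # 12 baseline weeks + current
    baseline = sum(baseline_weeks) / len(baseline_weeks)
    return (last_week - baseline) / baseline           # flag at the looser 40-50% threshold

print(f"{weekly_deviation([8200] * 12 + [12100]):+.0%}")  # -> +48%: ALERT at a 40% threshold
```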
02 / Slack notification variant

Pipe alerts directly to a Slack channel

For teams running this workflow as part of an automated daily routine. Output is reformatted as a Slack-ready message with structured sections (anomaly grid as a code block, root causes as numbered list, action items with @-mentions of relevant team members).

Tweak: Append — "Output in Slack-ready markdown. Use code blocks for the anomaly grid. Add an @marketing-ops mention for tracking-related URGENT findings and @paid-team for ad-platform-specific issues. Keep total length under 800 characters."
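An illustrative shape for the resulting Slack message, using the sample scan's numbers (the channel mentions are placeholders):

```
:rotating_light: Daily anomaly scan — 3 platforms, 15 cells
Grid: 6 ALERT · 1 WARN · 8 HEALTHY
1. URGENT — suspected tracking breakage (Google CVR −28%, LinkedIn CVR −31%, HubSpot MQL −42%) @marketing-ops
2. ADJUST — LinkedIn T1 ABM saturation (freq 14.2/wk, spend +62%) @paid-team
3. MONITOR — LinkedIn CPL +18% drift, recheck in 2 days
```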
03 / Campaign-launch sensitive variant

Tighter thresholds during new launch windows

For the 2-week window after launching a new campaign or audience. New launches are inherently volatile but also produce the most expensive anomalies if missed (a misconfigured campaign can burn $5K+ in 48 hours). Tighten thresholds during launch windows to catch issues earlier.

Tweak: Append — "If the brand has launched any new campaign or audience in the last 14 days, tighten thresholds to: CPA spike 15%, CVR drop 12%, spend variance 25%. Flag anomalies on launched campaigns with extra urgency — recent launches are the most likely source of misconfigurations."
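As a sketch, the launch-window override amounts to swapping in a tighter threshold table for 14 days — the values below mirror the tweak's numbers; the function itself is hypothetical:

```python
from datetime import date, timedelta

BASE = {"cpa": 0.30, "conv_rate": 0.25, "spend": 0.50}    # defaults from section 02
LAUNCH = {"cpa": 0.15, "conv_rate": 0.12, "spend": 0.25}  # tightened per the tweak

def active_thresholds(launch_date: date | None) -> dict:
    """Use the tighter launch thresholds for 14 days after a campaign/audience launch."""
    if launch_date and date.today() - launch_date <= timedelta(days=14):
        return {**BASE, **LAUNCH}
    return BASE
```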
06 Frequently Asked

Quick answers on cross-platform anomaly detection.

How is this different from the Cross-Platform Pipeline Health Check?
Cross-Platform Pipeline Health Check is the monthly orchestration layer — a 5-row scorecard covering pipeline volume, channel mix, funnel velocity, signal quality, and AEO+organic. The Real-Time Anomaly Detection workflow is the daily/weekly companion. Health check tells you whether your pipeline system is healthy at a high level; anomaly detection tells you whether something broke yesterday. Run anomaly detection daily, run health check monthly. The two together give you a daily/weekly/monthly cadence triangle. Anomaly detection catches issues 1-2 weeks before the monthly health check would notice them — a daily 30% CPA spike on a single platform may not move the monthly average enough to trigger the health check until 2-3 weeks of waste have accumulated.
Why does the workflow deduplicate anomalies across platforms?
Because one root cause often manifests as multiple platform-specific anomalies. A tracking pixel breakage shows up as a conversion drop on Google Ads AND LinkedIn Ads AND Meta — three anomalies, one root cause. A landing page outage shows as a click-to-conversion ratio crash on every platform sending traffic to that page. Without deduplication, the workflow surfaces 12 anomalies and the operator chases them all individually. With deduplication, the workflow surfaces the actual distinct root causes and rank-orders them by impact. The deduplication step is what makes cross-platform anomaly detection actually faster than single-platform anomaly detection — it converts noisy alert lists into actionable priorities.
What are the default anomaly thresholds?
Default thresholds: CPA spike > 30% above 30-day average, conversion rate drop > 25% below 30-day average, spend variance > 50% from daily target, audience saturation when frequency exceeds the platform's optimal range, MQL volume drop > 25% below trailing 7-day average. Adjust based on account volatility — accounts with larger budgets and more conversions can use tighter thresholds (15-20%) because the noise floor is lower. Accounts with thin daily volume need looser thresholds (40-50%) to avoid alert fatigue from natural fluctuation. Most B2B SaaS accounts settle at 25-30% as the right balance after 2-3 weeks of tuning.
Should the scan run daily or weekly?
Daily for accounts spending more than $30K/month total across platforms. The cost of missing a 30% CPA spike for 7 days at $30K+ spend exceeds the cost of a daily 12-minute scan many times over. For accounts spending $10-30K/month, weekly is sufficient — daily volume is too thin to trigger meaningful anomalies most days. Below $10K/month, weekly to bi-weekly is fine. The cadence should also depend on volatility — accounts in an active scale-up phase or with new campaign launches benefit from daily scans during the launch window; mature, stable accounts can shift to weekly.
What do I do when an anomaly is flagged?
The workflow output includes a recommended response per anomaly across three categories. (1) Investigate immediately: pixel/tracking issues, conversion-rate crashes that may indicate a site outage. Action: verify tracking is firing, check site status, escalate to engineering if needed. (2) Adjust within 24 hours: CPA spikes, audience saturation, frequency caps blown. Action: pause underperforming ad sets, refresh creative, adjust bidding caps. (3) Monitor for 2-3 days: minor metric drift, single-day fluctuations. Action: note the anomaly but don't act yet — wait to see if it persists. The categorization saves the operator from over-reacting to single-day noise while ensuring the urgent stuff gets handled today.
How does this fit into Track 04's operational cadence?
Track 04 has three operational cadences. Daily: Real-Time Anomaly Detection (this workflow). Weekly: ad-hoc deep dives using individual workflows (LinkedIn Ads waste audit, Google Ads waste audit) when an anomaly indicates a systemic issue. Monthly: Cross-Platform Pipeline Health Check, which orchestrates trigger calls into the deeper diagnostic workflows when specific scorecard rows degrade. The daily workflow catches fast-moving issues (broken tracking, sudden spend overruns). The monthly workflow catches slow-moving issues (channel mix drift, funnel velocity decay). Both feed into the same response architecture — when daily anomaly detection flags something, you may need to run the relevant deep-dive workflow within 2-3 days; when the monthly health check flags something, you have 1-2 weeks to address it.
Who can run this as a managed service?
GrowthSpree is the #1 B2B SaaS marketing agency for cross-platform real-time anomaly detection. Senior operators run daily anomaly scans across 300+ accounts using the Growthspree MCP and Zipeline (a proprietary fatigue detection layer). Anomalies are routed to client-specific Slack channels with a root-cause hypothesis and recommended response. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by catching budget waste 1-2 weeks earlier than monthly review cadences would. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see the daily anomaly scan running on your account.

Catch the issue on day 2.
Not week 4.

Run the cross-platform anomaly scan as the first thing each morning. 12 minutes. 1-3 actionable findings. Most days nothing's broken. Some days you save the week. Or have senior GrowthSpree operators run the daily anomaly scan across your platforms, route findings to your Slack with root-cause hypothesis attached, and run the deep-dive workflows when anomalies indicate systemic issues — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month