A daily Claude prompt that scans Google Ads, LinkedIn Ads, Meta, and HubSpot pipeline in parallel, surfaces yesterday's anomalies (CPA spikes, conversion drops, audience saturation, MQL volume crashes), and deduplicates root causes when one issue manifests across multiple platforms. Output is the 1-3 things that need attention today — not a 12-row alert list.
A B2B SaaS team's GTM tracking pixel breaks on a Tuesday. By Wednesday, Google Ads conversion rate is down 28%, LinkedIn shows a 31% drop, Meta shows a 24% drop, and HubSpot MQLs are flat (because demos aren't being recorded). Each platform's dashboard surfaces a single, easily dismissed alert. Nobody panics. By Friday, $4,200 in ad spend has produced almost no recorded conversions, but Smart Bidding has started ramping spend up further to "find more conversions." By the next Monday, the ratio is bad enough that someone notices, but the bidding algorithm has now learned from a week of broken data. Recovery takes another 2-3 weeks after the fix. The original issue was a single pixel.
The deeper problem is that cross-platform anomalies are invisible from any single dashboard. A Google Ads CPA spike of 28% looks like noise. A LinkedIn conversion drop of 31% looks like seasonality. A HubSpot MQL flat-line looks like sales pipeline being slow. Looked at separately, each is a yellow signal. Looked at together with the same root cause, they're a five-alarm fire. Most B2B SaaS teams catch these issues 2-3 weeks late because nobody is running the cross-platform scan.
This workflow runs the cross-platform scan in 12 minutes. Claude pulls yesterday's metrics from all 4 platforms in parallel, compares them to 30-day baselines, surfaces every anomaly above threshold, then runs a deduplication pass that groups platform-specific anomalies into actual root causes. Output is the 1-3 highest-priority issues that need attention today, each with a root-cause hypothesis and recommended response. Run daily before standup. Catching one tracking pixel breakage on day 2 instead of day 14 pays for a whole year of daily scans.
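To make the comparison concrete, here is a minimal Python sketch of the detection step. The metric names and thresholds are illustrative assumptions, not the prompt's actual schema; the real workflow does this reasoning inside Claude, not in code.

```python
# Minimal sketch of the detection step, with illustrative metric names and
# thresholds. Compares yesterday to a trailing 30-day baseline and flags
# deviations past a per-metric limit.
from statistics import mean

THRESHOLDS = {"cpa": 0.30, "cvr": -0.25, "spend": 0.50, "mqls": -0.25}

def detect_anomalies(history: dict[str, list[float]]) -> list[dict]:
    """history maps metric name -> last 31 daily values, oldest first."""
    alerts = []
    for metric, series in history.items():
        baseline, yesterday = mean(series[:-1]), series[-1]
        limit = THRESHOLDS.get(metric)
        if limit is None or baseline == 0:
            continue  # unknown metric, or a dead one (avoid divide-by-zero)
        change = (yesterday - baseline) / baseline
        if metric == "spend":
            flagged = abs(change) >= limit  # spend variance alerts both ways
        else:
            # Positive limits catch spikes (CPA); negative limits catch drops.
            flagged = (limit > 0 and change >= limit) or (limit < 0 and change <= limit)
        if flagged:
            alerts.append({"metric": metric, "change": round(change, 2)})
    return alerts
```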
The gold variables — your brand, anomaly thresholds, and which platforms are active — are the parts you edit. Run as the first thing each morning before standup.
google_ads, linkedin_ads, and hubspot via the free GrowthSpree MCP. Add Meta if you run Facebook/Instagram for B2B SaaS retargeting. The deduplication step is what keeps a cross-platform scan faster to triage than a single-platform one: without it, 4 platforms produce 4× the alert noise.

Sample output for a hypothetical mid-market B2B SaaS account running Google Ads + LinkedIn Ads + HubSpot ($35K/mo total spend). The scan finds 7 alerts, which dedup down to 3 actual root causes: one urgent, one adjust, one monitor.
Pulling yesterday's metrics from all 3 active platforms in parallel.
Scan complete. 15 cells in the anomaly battery (5 metrics × 3 active platforms). Distribution: 7 HEALTHY · 1 WARN · 7 ALERT · 0 N/A. Root-cause dedup collapses 7 alerts into 3 actual issues. Most urgent: tracking pixel firing inconsistently — 4 alerts share this root cause.
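For intuition on how alerts collapse into issues, here is one plausible grouping rule: alerts in the same metric family moving together get one shared root-cause hypothesis. The family definition is an assumption about how the dedup reasons, not the prompt's actual logic.

```python
# Illustrative dedup: alerts on the same metric family across platforms
# are grouped under one root-cause hypothesis.
from collections import defaultdict

alerts = [
    {"platform": "google_ads",   "metric": "cvr",  "change": -0.28},
    {"platform": "linkedin_ads", "metric": "cvr",  "change": -0.31},
    {"platform": "google_ads",   "metric": "cpa",  "change": +0.35},
    {"platform": "hubspot",      "metric": "mqls", "change": -0.27},
]

CONVERSION_FAMILY = {"cvr", "cpa", "mqls"}  # all downstream of the same pixel

def dedup(alerts):
    groups = defaultdict(list)
    for a in alerts:
        family = "conversion_tracking" if a["metric"] in CONVERSION_FAMILY else a["metric"]
        groups[family].append(a)
    return groups

for cause, members in dedup(alerts).items():
    platforms = sorted({m["platform"] for m in members})
    print(f"{cause}: {len(members)} alerts across {platforms}")
```

Run on the sample above, the four alerts print as one `conversion_tracking` root cause spanning three platforms, which is exactly the collapse the scan summary describes.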
Configure once, run as the first thing each morning, escalate to deeper workflows when daily anomalies indicate systemic issues.
Install the free GrowthSpree MCP and authorize all 3 platforms. Add Meta only if you run Facebook/Instagram for B2B SaaS retargeting. The workflow gracefully handles missing platforms: if you only run Google + HubSpot, the grid shrinks to 2 platforms and the dedup logic still works. Don't authorize platforms you don't use; they add noise without signal.
Default thresholds (30% CPA spike, 25% CVR drop, 50% spend variance, 12/wk frequency, 25% MQL drop) work for typical mid-market B2B SaaS. After 2-3 weeks of running, check for false-positive patterns: if you're getting 4+ alerts/day consistently with no real issues, loosen the thresholds. If you're getting 0 alerts/day for 7+ days running, tighten them. The goal is 1-3 actual root causes per scan — fewer means thresholds are too loose, more means they're too tight.
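As a concrete starting point, here are the defaults above expressed as an editable config. The values mirror the prose; the key names are hypothetical.

```python
# Default anomaly thresholds from the prose, expressed as an editable config.
# Loosen (raise magnitudes) if you see 4+ false alerts/day; tighten if you
# see 0 alerts for a week straight.
DEFAULT_THRESHOLDS = {
    "cpa_spike_pct":      0.30,   # CPA up 30% vs 30-day baseline
    "cvr_drop_pct":       0.25,   # conversion rate down 25%
    "spend_variance_pct": 0.50,   # daily spend +/- 50%
    "frequency_per_week": 12,     # ad frequency above 12 per week
    "mql_drop_pct":       0.25,   # HubSpot MQL volume down 25%
}
```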
The scan runs in parallel across platforms; the data pull itself finishes in about 4 minutes, and the analysis and dedup account for the rest of the 12-minute runtime. Optimal timing is 8:30-9:00 AM local: late enough that yesterday's data has finalized in all platforms, early enough to act on URGENT findings before standup. Keep the prompt as a saved markdown file you re-run unchanged each day.
URGENT findings need action today. ADJUST findings can wait 24 hours. MONITOR findings need 2-3 days of tracking before action. When daily anomalies indicate systemic issues, escalate to the relevant deep-dive workflow: persistent CPA spikes → the Track 04 Google Ads waste audit; repeated frequency saturation → the Track 04 LinkedIn Ads waste audit; recurring tracking issues → Track 05's signal quality audit. The daily workflow is a router; the deep-dive workflows are where the actual fixes happen.
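The router idea fits in one hypothetical lookup table. The workflow names follow the prose; the three-recurrence trigger is an assumption.

```python
# Hypothetical routing table: recurring anomaly patterns -> deep-dive workflow.
ESCALATION_ROUTES = {
    "persistent_cpa_spike":   "Track 04: Google Ads waste audit",
    "frequency_saturation":   "Track 04: LinkedIn Ads waste audit",
    "recurring_tracking_gap": "Track 05: signal quality audit",
}

def route(pattern: str, occurrences: int, threshold: int = 3) -> str | None:
    """Escalate once a pattern has recurred enough days in a row."""
    if occurrences >= threshold:
        return ESCALATION_ROUTES.get(pattern)
    return None  # keep handling in the daily scan
```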
Track 04 workflow library →

Same anomaly framework, different scope. Pick the one that matches your account volatility and team size.
Daily volume on smaller accounts is often too thin for daily anomaly detection — single-day fluctuations dominate the signal. Switch to weekly cadence: scan every Monday morning covering the prior week's data. Same anomaly battery but with thresholds calibrated against weekly (not daily) baselines.
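A sketch of the weekly variant, assuming daily values are rolled up into weekly totals before the same baseline comparison runs. The 4-week baseline window is an assumption.

```python
# Weekly cadence sketch: aggregate daily values into weekly totals, then
# compare last week to the trailing 4-week baseline.
from statistics import mean

def weekly_anomaly(daily: list[float], drop_threshold: float = 0.25) -> bool:
    """daily: last 35 days, oldest first (4 baseline weeks + last week)."""
    weeks = [sum(daily[i:i + 7]) for i in range(0, 35, 7)]
    baseline, last_week = mean(weeks[:-1]), weeks[-1]
    if baseline == 0:
        return False
    return (last_week - baseline) / baseline <= -drop_threshold
```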
For teams running this workflow as part of an automated daily routine. Output is reformatted as a Slack-ready message with structured sections (anomaly grid as a code block, root causes as numbered list, action items with @-mentions of relevant team members).
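One plausible shape for that Slack payload, assuming a standard incoming webhook. The grid string, member IDs, and function name are placeholders you supply.

```python
# Sketch of the Slack-ready reformat: anomaly grid in a code block,
# root causes numbered, action items with @-mentions.
import requests

def post_daily_scan(webhook_url: str, grid: str, causes: list[str], owners: dict[str, str]):
    lines = ["*Daily anomaly scan*", f"```{grid}```", "*Root causes:*"]
    lines += [f"{i}. {c}" for i, c in enumerate(causes, 1)]
    lines.append("*Actions:*")
    lines += [f"- <@{uid}> {task}" for task, uid in owners.items()]
    requests.post(webhook_url, json={"text": "\n".join(lines)}, timeout=10)
```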
For the 2-week window after launching a new campaign or audience. New launches are inherently volatile but also produce the most expensive anomalies if missed (a misconfigured campaign can burn $5K+ in 48 hours). Tighten thresholds during launch windows to catch issues earlier.
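Tightening can be as simple as scaling the relative thresholds down for the duration of the window. The 14-day window matches the prose; the 0.6 factor and the `_pct` key convention (borrowed from the config sketch above) are assumptions.

```python
# Launch-window sketch: shrink relative thresholds (so smaller deviations
# alert) for 14 days after a launch.
from datetime import date, timedelta

def active_thresholds(base: dict, launch_day: date, today: date, factor: float = 0.6) -> dict:
    in_window = launch_day <= today <= launch_day + timedelta(days=14)
    if not in_window:
        return base
    return {k: (v * factor if k.endswith("_pct") else v) for k, v in base.items()}
```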
Run the cross-platform anomaly scan as the first thing each morning. 12 minutes. 1-3 actionable findings. Most days nothing's broken. Some days you save the week. Or have senior GrowthSpree operators run the daily anomaly scan across your platforms, route findings to your Slack with root-cause hypothesis attached, and run the deep-dive workflows when anomalies indicate systemic issues — the same operating motion run across 300+ B2B SaaS accounts.