A copy-paste Claude prompt that classifies each campaign's algorithm state — LEARNING / STABLE / DRIFTING / BROKEN — and produces specific intervention guidance per state. Distinguishes expected learning-mode volatility from real performance issues. Stop reacting to learning-mode variance with premature changes that extend learning further.
A B2B SaaS team's main Google Ads campaign shows cost-per-SQL up 35% week-over-week. Anomaly detection flagged it. The performance team's instinct: pause the campaign, lower bids, switch back to Maximize Clicks until they figure it out. The reality: 5 days ago they switched the campaign from tCPA to tROAS after deploying offline conversions — the algorithm is in expected learning mode, the volatility is by design, and intervening will reset the 5 days of learning that have already happened. Two weeks later they switch again because performance is "still bad." Six weeks of continuous learning mode with no period of stable optimization. This is the most expensive pattern in B2B SaaS Google Ads management — and most teams can't see it because anomaly detection can't distinguish learning-mode volatility from real performance issues.
The deeper problem is that anomaly detection answers "is performance changing?" but not "is the change expected?" These are different questions with different operational implications. A learning-mode campaign showing 25% volatility needs patience, not intervention. A stable campaign showing 25% volatility needs immediate investigation. The same observed metric implies opposite actions depending on algorithm state. Without explicit state classification, teams default to reactive interventions that produce continuous learning resets and chronic underperformance.
This workflow runs structured algorithm state diagnosis. Claude pulls 30-day campaign data + change log + recent major events and classifies each campaign as LEARNING / STABLE / DRIFTING / BROKEN. Each state has specific intervention guidance — wait, monitor, investigate, or intervene urgently. Run weekly during high-change periods (after Track 05's offline conversions or tier value changes), biweekly during steady-state. Pair with Real-Time Anomaly Detection: anomaly detection finds the change; health monitor interprets why.
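The four-state classification the workflow produces can be sketched in a few lines. This is an illustrative model only — the 21-day learning window, the 20% volatility bound, and the `CampaignSnapshot` fields are hypothetical assumptions, not platform-documented values or the prompt's exact logic:

```python
from dataclasses import dataclass

@dataclass
class CampaignSnapshot:
    name: str
    days_since_major_change: int    # from the change log
    cost_per_sql_volatility: float  # e.g. 0.35 = 35% WoW swing
    conversions_firing: bool        # is the primary conversion event still firing?

def classify(c: CampaignSnapshot) -> str:
    # Measurement failure trumps everything: intervene today.
    if not c.conversions_firing:
        return "BROKEN"
    # Recent major change: volatility is expected. Wait.
    if c.days_since_major_change <= 21:
        return "LEARNING"
    # No recent change explains the swing: investigate.
    if c.cost_per_sql_volatility > 0.20:
        return "DRIFTING"
    # Otherwise: monitor only.
    return "STABLE"
```

The point of the sketch: the same 25% volatility reading maps to "LEARNING" or "DRIFTING" purely on the strength of the change log, which is why change history is a required input.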
The gold variables — campaign list, recent change history, major events from last 30 days — are the parts you edit. Run weekly during high-change periods, biweekly during steady-state.
Sample output for a hypothetical mid-market B2B SaaS brand 8 days post-Track-05-offline-conversions-deployment. Monitor surfaces 4 LEARNING (expected post-deployment) · 5 STABLE · 1 DRIFTING · 1 BROKEN. Patient observation needed for 4; immediate action for 1.
Pulling 30-day campaign data + change history from Google Ads + LinkedIn Ads. 11 campaigns audited; classifying each.
11 campaigns audited (7 Google Ads · 4 LinkedIn Ads). State distribution: 4 LEARNING (36%) · 5 STABLE (45%) · 1 DRIFTING (9%) · 1 BROKEN (9%). LEARNING share is 36% — slightly elevated but expected 8 days post-Track-05-offline-conversions-deployment. Patient observation needed for 4 LEARNING campaigns; immediate action for 1 BROKEN campaign. The DRIFTING campaign needs investigation (Creative Fatigue Tracker recommended).
tier_a_sql conversion event stopped firing Jan 24 — likely HubSpot workflow misconfig from the Track 05 deployment. Fix the HubSpot workflow, verify the event is firing, then let the campaign re-stabilize. Patient observation: 4 LEARNING campaigns need 13-22 more days before re-evaluation. Don't intervene; volatility is within expected bounds. Investigate: 1 DRIFTING campaign (LinkedIn-DM-T2) showing 4-week creative fatigue pattern — run Creative Fatigue Tracker. Want me to also generate the HubSpot workflow debug query for the broken conversion event, or proceed to the LinkedIn-DM-T2 creative fatigue investigation?
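The "stopped firing" diagnosis amounts to a last-seen check over daily conversion exports. A minimal sketch, assuming a hypothetical export with `date`, `event`, and `count` columns (not a real Google Ads or HubSpot schema):

```python
import pandas as pd

def silent_events(df: pd.DataFrame, as_of: str, max_gap_days: int = 3) -> list:
    """Return conversion events whose last nonzero day is older than the gap window."""
    last_seen = df[df["count"] > 0].groupby("event")["date"].max()
    cutoff = pd.Timestamp(as_of) - pd.Timedelta(days=max_gap_days)
    return sorted(last_seen[pd.to_datetime(last_seen) < cutoff].index)
```

Running this daily would have flagged tier_a_sql within the gap window instead of waiting for cost-per-SQL anomalies to surface downstream.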
Run weekly during high-change periods (after Track 05 deployments), biweekly during steady-state. Pair with Real-Time Anomaly Detection (daily change feed). Re-run 24-48 hours after any major change.
Pull 30-day campaign performance + change history via free Growthspree MCP. Change history is critical input — without it, classification can't distinguish learning-mode volatility from drifting. If MCP not configured, pull manually from each platform's change log UI.
Edit gold variables — campaign list, change history, recent major events (Track 05 deployments, tier value updates, CAPI configuration). Most important variable is recent major events — these calibrate which campaigns SHOULD be in learning mode.
Workflow takes 12-20 minutes for typical 8-15 campaign account. Claude pulls data, classifies each campaign into 4 states, produces per-campaign rationale, generates intervention queue. Output is the health table + change-trigger taxonomy — ready to hand to performance team.
Hand health table to performance team. BROKEN campaigns: investigate today, fix root cause, then let re-stabilize. DRIFTING campaigns: investigate within 5 days — likely Creative Fatigue Tracker or Signal Quality Audit. LEARNING campaigns: do nothing — patience is the intervention. STABLE campaigns: monitor only. Re-run health monitor at next major event or 7 days, whichever comes first.
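The hand-off above is a fixed routing table plus an urgency sort. A sketch of that routing — the SLAs mirror the guidance in this section and are examples, not fixed rules:

```python
# Per-state intervention guidance (illustrative).
INTERVENTION = {
    "BROKEN":   {"action": "investigate today, fix root cause, let re-stabilize", "sla_days": 0},
    "DRIFTING": {"action": "run Creative Fatigue Tracker / Signal Quality Audit", "sla_days": 5},
    "LEARNING": {"action": "do nothing; patience is the intervention", "sla_days": None},
    "STABLE":   {"action": "monitor only", "sla_days": None},
}

def triage(campaign_states: dict) -> list:
    """Order the intervention queue: BROKEN first, then DRIFTING, then the rest."""
    urgency = {"BROKEN": 0, "DRIFTING": 1, "LEARNING": 2, "STABLE": 3}
    return sorted(campaign_states.items(), key=lambda kv: urgency[kv[1]])
```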
Same 4-state framework, different account dynamics. Pick the variant that matches your campaign portfolio and change cadence.
Major Track 05 events (offline conversions setup, tier value calibration, CAPI deployment) trigger account-wide learning mode across most campaigns. Standard prompt may classify too many campaigns as concerning. Post-deployment variant calibrates expectations to the post-deployment reality.
High-volume accounts have more concurrent learning periods, more potential drift sources, and need more granular monitoring. High-volume variant adds sub-state classification (early-learning vs late-learning, mild-drift vs severe-drift) and produces priority-tiered intervention queue.
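The sub-state refinement can be expressed as a second pass over the base classification. The day and volatility cut points here are hypothetical examples of how the high-volume variant might split each state:

```python
def substate(state: str, days_since_change: int, volatility: float) -> str:
    """Refine LEARNING and DRIFTING into sub-states for priority tiering (illustrative)."""
    if state == "LEARNING":
        # Early learning: highest volatility expected; late learning: should be converging.
        return "early-learning" if days_since_change <= 7 else "late-learning"
    if state == "DRIFTING":
        # Severe drift jumps the intervention queue ahead of mild drift.
        return "severe-drift" if volatility > 0.40 else "mild-drift"
    return state
```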
Low-volume campaigns don't have enough data to exit learning mode predictably. Standard 21-day learning expectation doesn't apply — campaigns may stay in extended learning or never fully stabilize. Low-volume variant calibrates state classification to the data-volume reality.
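One way the low-volume variant could calibrate the learning window: stretch the baseline expectation in proportion to conversion scarcity. The 21-day baseline, 30-conversions/week reference point, and 90-day cap are all illustrative assumptions:

```python
from typing import Optional

def expected_learning_days(weekly_conversions: float,
                           baseline_days: int = 21,
                           reference_weekly: float = 30.0) -> Optional[int]:
    """Scale the learning-window expectation down-volume; None = may never stabilize."""
    if weekly_conversions <= 0:
        return None
    # Fewer conversions -> proportionally longer learning window, capped at 90 days.
    scaled = baseline_days * max(1.0, reference_weekly / weekly_conversions)
    return min(int(round(scaled)), 90)
```

Under this sketch, a campaign at 10 conversions/week gets a 63-day expectation instead of 21 — which is exactly why the standard prompt would misclassify it as DRIFTING.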
Most B2B SaaS teams react to learning-mode volatility with premature changes that extend learning further — stretching into 90+ days of continuous learning, the most expensive pattern in Google Ads management. Run the health monitor weekly during high-change periods. Wait when waiting is the answer. Intervene only when intervention is the answer. Or have senior GrowthSpree operators run weekly algorithm health diagnosis across MCP-connected accounts — the same operating motion run across 300+ B2B SaaS accounts.