Workflow · Diagnostic · ~20 min run · Google Ads + LinkedIn connectors

The change is real.
The cause might not be.

A copy-paste Claude prompt that classifies each campaign's algorithm state — LEARNING / STABLE / DRIFTING / BROKEN — and produces specific intervention guidance per state. Distinguishes expected learning-mode volatility from real performance issues. Stop reacting to learning-mode variance with premature changes that extend learning further.

4 states
Learning · Stable · Drifting · Broken
7-21 days
Typical Smart Bidding learning duration
15-30% volatility
Expected during learning mode
Weekly cadence
Biweekly during steady-state
01 The Problem in 60 Seconds

Performance volatility looks the same.
The causes don't.

A B2B SaaS team's main Google Ads campaign shows cost-per-SQL up 35% week-over-week. Anomaly detection flagged it. The performance team's instinct: pause the campaign, lower bids, switch back to Maximize Clicks until they figure it out. The reality: 5 days ago they switched the campaign from tCPA to tROAS after deploying offline conversions — the algorithm is in expected learning mode, the volatility is by design, and intervening will reset the 5 days of learning that have already happened. Two weeks later they switch again because performance is "still bad." Six weeks of continuous learning mode with no period of stable optimization. This is the most expensive pattern in B2B SaaS Google Ads management — and most teams can't see it because anomaly detection can't distinguish learning-mode volatility from real performance issues.

The deeper problem is that anomaly detection answers "is performance changing?" but not "is the change expected?" These are different questions with different operational implications. A learning-mode campaign showing 25% volatility needs patience, not intervention. A stable campaign showing 25% volatility needs immediate investigation. The same observed metric implies opposite actions depending on algorithm state. Without explicit state classification, teams default to reactive interventions that produce continuous learning resets and chronic underperformance.

This workflow runs structured algorithm state diagnosis. Claude pulls 30-day campaign data + change log + recent major events and classifies each campaign as LEARNING / STABLE / DRIFTING / BROKEN. Each state has specific intervention guidance — wait, monitor, investigate, or intervene urgently. Run weekly during high-change periods (after Track 05's offline conversions or tier value changes), biweekly during steady-state. Pair with Real-Time Anomaly Detection: anomaly detection finds the change; health monitor interprets why.

4-State Algorithm Classification · Each campaign falls into exactly one state · Different states require opposite actions. A rough sketch of the classification heuristic follows the list.
State 1 · LEARNING · Volatile by design. Algorithm in learning mode after a major change (bidding strategy, conversion event, budget >30%). Volatility of 15-30% is expected. Duration: 7-21 days Google · 14-30 days LinkedIn. Action: WAIT
State 2 · STABLE · Normal operation. In production 30+ days with no major changes. Performance varies within normal bounds (5-10% week-over-week). The state where most campaigns should live. Action: MONITOR
State 3 · DRIFTING · Gradual erosion. Performance gradually eroding over 4-6 weeks without clear cause. Cost per conversion creeping up 2-5% per week. Often signal quality, audience overlap, or creative fatigue. Action: INVESTIGATE
State 4 · BROKEN · Acute issue. Cost per conversion 2x+ baseline, conversion volume drops 50%+, or impressions collapsing. Often a tracking issue, conversion event corruption, or audience problem. Action: URGENT
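For readers who want the classification logic spelled out, here is a minimal, hypothetical Python sketch of the heuristic. The thresholds mirror the defaults above; the field names are illustrative assumptions, not any real ad-platform API.

```python
from dataclasses import dataclass

@dataclass
class CampaignSnapshot:
    platform: str                 # "google" or "linkedin"
    days_since_major_change: int  # bidding strategy, conversion event, budget >30%, audience overhaul
    wow_volatility: float         # week-over-week cost-per-conversion variance, e.g. 0.24 = 24%
    cpa_vs_baseline: float        # cost per conversion vs. 60-day baseline, e.g. 2.4 = 2.4x
    weekly_erosion: float         # average weekly cost-per-conversion creep over the last 4+ weeks
    conv_volume_drop: float       # conversion volume drop vs. baseline, e.g. 0.6 = -60%

def classify(c: CampaignSnapshot) -> str:
    """Assign one of the four states using the heuristic thresholds described above."""
    learning_window = 21 if c.platform == "google" else 30  # LinkedIn learns more slowly

    # BROKEN: acute issue that a recent change can't explain (or catastrophic even during learning)
    if (c.cpa_vs_baseline >= 2.0 or c.conv_volume_drop >= 0.5) and (
        c.days_since_major_change > learning_window or c.cpa_vs_baseline >= 3.0
    ):
        return "BROKEN"

    # LEARNING: recent major change with volatility inside the expected 15-30% band
    if c.days_since_major_change <= learning_window and c.wow_volatility <= 0.30:
        return "LEARNING"

    # DRIFTING: no recent change, slow consistent erosion (roughly 2-7% per week)
    if c.days_since_major_change > 30 and 0.02 <= c.weekly_erosion <= 0.07:
        return "DRIFTING"

    return "STABLE"
```

The real diagnosis in the prompt also weighs days-in-state and the change log, but this is the shape of the decision each campaign goes through.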
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — campaign list, recent change history, major events from last 30 days — are the parts you edit. Run weekly during high-change periods, biweekly during steady-state.

claude_desktop — algorithm_health_monitor.md
Role
You are running the weekly Algorithm Health Monitor for my B2B SaaS ad accounts. Pull 30-day campaign data + change log + recent major events. Classify each campaign as LEARNING / STABLE / DRIFTING / BROKEN. Produce specific intervention guidance per state.

My Brand
Brand: [your B2B SaaS brand name]
Site URL: [your domain]
Average ACV: [e.g. "$25K mid-market"]
Average sales cycle: [e.g. "84 days"]

Ad Accounts
Google Ads accounts: [list account IDs or names — pull via free Growthspree MCP]
LinkedIn Ads accounts: [list account IDs or names]
Audit scope: [All campaigns / Top 10 by spend / Specific campaigns named]

Recent Change History (Last 30 Days)
// Critical input — algorithm state depends heavily on what changed and when. Pull from ad platform change history if available.
Bidding strategy changes: [list per campaign — e.g. "Search-Brand-USA: Manual CPC → tCPA on Jan 18"]
Conversion event changes: [list — e.g. "Added tiered MQL/SQL/Opp/CW events Jan 22 (Track 05 deployment)"]
Budget changes >30%: [list — e.g. "Search-Generic-USA budget doubled Jan 25"]
Audience changes: [list — e.g. "LinkedIn-DM-T1 audience refreshed Jan 20"]
Creative changes: [list — e.g. "LinkedIn-DM-T1 all 6 creatives replaced Jan 15"]

Optional: Recent Major Events
Track 05 offline conversions deployment date: [date or "n/a"]
Track 05 tier value recalibration date: [date or "n/a"]
LinkedIn CAPI deployment date: [date or "n/a"]
Other major events: [any other significant changes]

Task
1. Pull campaign performance data (last 30 days):
   - Cost per conversion (daily series)
   - Conversion volume (daily series)
   - Impression volume (daily series)
   - Bid behavior (avg CPC variance)
   - Bidding strategy + when last changed
2. Classify each campaign into 1 of 4 states:
   LEARNING:
   - Major change occurred < 21 days ago (bidding strategy, conversion event, budget >30%, audience overhaul)
   - Performance volatility 15-30% week-over-week (within expected learning bounds)
   - Days remaining in learning: estimate based on Smart Bidding heuristics
   STABLE:
   - No major changes in last 30 days
   - Performance varies within 5-10% week-over-week
   - Cost per conversion within 15% of 60-day baseline
   DRIFTING:
   - No major changes in last 30 days
   - Cost per conversion creeping up 2-5% per week consistently for 4+ weeks
   - Conversion rate declining 1-3% per week
   - Often signals: signal quality erosion, audience overlap with competitors, creative fatigue
   BROKEN:
   - Cost per conversion 2x+ baseline OR conversion volume drops 50%+ OR impressions collapsing
   - Acute change (last 7 days), not gradual
   - Often signals: tracking broken, conversion event corruption, audience problem, account suspension warning
3. Per campaign, produce:
   - Current state (1 of 4)
   - Days in current state
   - Rationale (which signals indicate this state)
   - Recommended action: WAIT (learning) / MONITOR (stable) / INVESTIGATE (drifting) / URGENT (broken)
   - Estimated time to next reassessment (e.g. "wait 9 more days then re-check")
4. Aggregate state distribution:
   - State counts and percentages across audited campaigns
   - High-priority interventions (BROKEN campaigns + DRIFTING with significant erosion)
   - Patient observation queue (LEARNING campaigns with days-remaining)
5. Change-trigger taxonomy:
   - Recent changes by trigger type
   - Expected learning duration per change type
   - Stacking warning if multiple changes happened to same campaign within 14 days (compound learning risk)

Output format
1. Headline: total campaigns audited, state distribution, top intervention priorities, learning-mode patience queue.
2. Status distribution bar (visual): % per state across audited campaigns.
3. Per-campaign health table: campaign / state / days in state / rationale / action.
4. Change-trigger taxonomy: recent changes mapped to expected learning impact.
5. Honest calibration:
   - If LEARNING is > 30% of campaigns, the team is making too many concurrent changes. Recommend pausing all non-urgent changes for 14 days to let learning complete.
   - If BROKEN is > 10% of campaigns, there's likely a systemic tracking or signal issue (often offline conversions misfiring, ICP scoring stale, or audience exclusion misconfigured). Investigate Track 05's signal quality before fixing campaigns individually.
   - If DRIFTING is > 20% of campaigns, the team is past-due on creative refresh, audience refresh, or signal quality audit. Prioritize Track 04's Creative Fatigue Tracker + Track 05's Signal Quality Audit.
   - For Smart Bidding learning, conversion volume threshold matters: campaigns with < 30 conversions/month don't have enough data to exit learning even after 21 days. Consider consolidating low-volume campaigns or extending learning expectations.
   - Don't classify a campaign as BROKEN within 21 days of a major change unless performance is catastrophic (cost per conversion 3x+ baseline). Most apparent issues during learning resolve themselves.

// Be specific. "Volatility expected" is generic. "tCPA learning mode triggered Jan 18 by strategy switch from Manual CPC; volatility 22% within expected 15-30% bounds; estimated 6 more days to stable" is actionable.
// Days-in-state matters as much as state. A LEARNING campaign at day 18/21 needs different attention than one at day 3/21. State trajectory is the operational signal.
// If multiple changes stacked within 14 days on the same campaign, flag the compound learning risk explicitly. The campaign may take 30-45 days to stabilize, not 21.
// For LinkedIn Smart Bidding specifically, learning durations are 50-70% longer than Google. Apply 1.5-2x multiplier to all learning estimates for LinkedIn campaigns.
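The learning-duration heuristics the prompt references (per-trigger base windows, the LinkedIn multiplier, the stacked-change extension) reduce to simple arithmetic. A minimal Python sketch, with illustrative trigger names and midpoint durations as assumptions, looks roughly like this:

```python
# Hypothetical helper: estimate how long a campaign should be expected to stay in learning.
BASE_LEARNING_DAYS = {          # illustrative midpoints of the ranges used in this workflow
    "bidding_strategy": 21,     # 7-21 days
    "conversion_event": 14,     # 7-14 days
    "budget_over_30pct": 10,    # 3-10 days
    "audience_overhaul": 14,    # 5-14 days
}

def estimated_learning_days(triggers: list[str], platform: str,
                            stacked_within_14_days: bool) -> int:
    days = max(BASE_LEARNING_DAYS.get(t, 14) for t in triggers)
    if platform == "linkedin":
        days = round(days * 1.7)   # LinkedIn Smart Bidding learning runs ~50-70% longer
    if stacked_within_14_days or len(triggers) > 1:
        days = round(days * 1.5)   # compound learning risk: expect 30-45 days, not 21
    return days

# e.g. a LinkedIn campaign with a CAPI deployment plus an audience refresh in the same week:
# estimated_learning_days(["conversion_event", "audience_overhaul"], "linkedin", True) -> ~36 days
```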
Pair with Real-Time Anomaly Detection. Anomaly detection runs daily and flags performance changes; health monitor runs weekly and interprets whether changes are expected (learning) or unexpected (real issue). Together they form a Diagnostic Pair — daily change detection + weekly state interpretation. Real-Time Anomaly Detection workflow.
03 What Claude Returns

The state diagnosis. The intervention queue.

Sample output for a hypothetical mid-market B2B SaaS brand 8 days post-Track-05-offline-conversions-deployment. Monitor surfaces 4 LEARNING (expected post-deployment) · 5 STABLE · 1 DRIFTING · 1 BROKEN. Patient observation needed for 4; immediate action for 1.

Claude · campaign data + change log analysis

Pulling 30-day campaign data + change history from Google Ads + LinkedIn Ads. 11 campaigns audited; classifying each.

growthspree.google_ads.campaigns.performance { window: "last_30d", metrics: ["cost_per_conv","conv_volume","impressions","cpc_variance"], include: "change_history" }
growthspree.linkedin_ads.campaigns.performance { window: "last_30d", apply_linkedin_learning_multiplier: 1.7 }

11 campaigns audited (7 Google Ads · 4 LinkedIn Ads). State distribution: 4 LEARNING (36%) · 5 STABLE (45%) · 1 DRIFTING (9%) · 1 BROKEN (9%). LEARNING share is 36% — slightly elevated but expected 8 days post-Track-05-offline-conversions-deployment. Patient observation needed for 4 LEARNING campaigns; immediate action for 1 BROKEN campaign. The DRIFTING campaign needs investigation (Creative Fatigue Tracker recommended).

State Distribution · 11 Campaigns Audited · 8 days post-offline-conversions deployment
LEARNING 36% — 4 campaigns · WAIT
STABLE 45% — 5 campaigns · MONITOR
DRIFTING 9% — 1 campaign · INVESTIGATE
BROKEN 9% — 1 campaign · URGENT
Per-Campaign Health Table · 11 Campaigns · Sorted by intervention priority
Campaign
State
Days
Rationale
Action
Search-Comp-Brand-USA · Google · tCPA · $4.2K/mo
BROKEN
3 days
Cost per SQL 2.4× baseline last 3 days. Conversions down 60%. tier_a_sql conversion event not firing since Jan 24 — likely HubSpot workflow misconfig.
URGENT
Search-Generic-USA · Google · tROAS · $8.5K/mo
LEARNING
8 / 21
tCPA → tROAS switch Jan 22 (offline conversions deployment). Volatility 24% — within 15-30% bounds. Wait 13 more days.
WAIT
PMax-Generic · Google · MaxConvValue · $5.8K/mo
LEARNING
8 / 21
New tiered conversion values Jan 22. Volatility 19% — within bounds. PMax learning takes longer than Search; wait full 21 days.
WAIT
LinkedIn-DM-T1 · LinkedIn · MaxConv · $3.1K/mo
LEARNING
8 / 30
CAPI conversion events deployed Jan 22. LinkedIn learning is 1.7× longer than Google. Volatility 18% — within bounds. Wait 22 more days.
WAIT
LinkedIn-Influencer-T1 · LinkedIn · MaxConv · $2.0K/mo
LEARNING
8 / 30
CAPI deployment + audience refresh Jan 22. Stacked changes — extend learning expectation to 35 days. Volatility 27% — at high end of bounds.
WAIT
Search-Brand-USA · Google · tCPA · $1.8K/mo
STABLE
52 days
No major changes since Dec 8. Cost per SQL within 8% of 60-day baseline. Conversion volume stable. Healthy operation.
MONITOR
Search-Brand-EU · Google · tCPA · $1.2K/mo
STABLE
45 days
No recent changes. Cost per SQL within 7% of baseline. Healthy operation.
MONITOR
Search-Long-Tail-USA · Google · MaxConv · $0.9K/mo
STABLE
38 days
Stable, low volume. Conversion volume 28/month — close to Smart Bidding minimum threshold. Consider consolidating.
MONITOR
PMax-Brand-Defender · Google · MaxConvValue · $1.5K/mo
STABLE
62 days
Stable defensive PMax campaign. ROAS within 6% of baseline. Healthy operation.
MONITOR
LinkedIn-Researcher-T2 · LinkedIn · MaxConv · $1.4K/mo
STABLE
41 days
Stable on existing form-fill conversion events (CAPI not yet deployed for this campaign). Performance flat but acceptable.
MONITOR
LinkedIn-DM-T2 · LinkedIn · MaxConv · $2.6K/mo
DRIFTING
28 days
No major changes. Cost per SQL up 3.2% per week for 4 weeks. Likely creative fatigue — same 6 ads running 47 days. Run Creative Fatigue Tracker.
INVESTIGATE
Change-Trigger Taxonomy · Last 30 Days · Maps changes to expected learning impact
Trigger
Duration
Expected impact
Bidding strategy switch
7-21d
Manual CPC → tCPA, tCPA → tROAS, MaxConv → MaxConvValue. Single biggest learning trigger. Volatility 15-30% expected.
Conversion event change
7-14d
Adding/removing conversion events, changing tier values. Track 05's Tiered Conversion Calculator output triggers this. Volatility 10-20%.
Budget change >30%
3-10d
Doubling budget or cutting by 50%+ resets learning. Within 30%, algorithm absorbs without reset. Volatility 5-15%.
Audience overhaul
5-14d
Adding/removing audience segments, switching from broad to specific. Volatility 10-20%.
LinkedIn-specific multiplier
×1.5-2.0
LinkedIn Smart Bidding learning is 50-70% longer than Google. Apply multiplier to all LinkedIn campaign learning estimates.
Immediate action: 1 BROKEN campaign needs urgent investigation today. Search-Comp-Brand-USA cost per SQL is 2.4× baseline because tier_a_sql conversion event stopped firing Jan 24 — likely HubSpot workflow misconfig from Track 05 deployment. Fix HubSpot workflow, verify event firing, then let campaign re-stabilize.

Patient observation: 4 LEARNING campaigns need 13-22 more days before re-evaluation. Don't intervene; volatility is within expected bounds.

Investigate: 1 DRIFTING campaign (LinkedIn-DM-T2) showing 4-week creative fatigue pattern — run Creative Fatigue Tracker.

Want me to also generate the HubSpot workflow debug query for the broken conversion event, or proceed to the LinkedIn-DM-T2 creative fatigue investigation?
TIME ELAPSED: 16 MINUTES   ·   SAME DIAGNOSIS BY HAND: 4-6 HOURS PER WEEK
04 Setup

Four steps. Weekly cadence.

Run weekly during high-change periods (after Track 05 deployments), biweekly during steady-state. Pair with Real-Time Anomaly Detection (daily change feed). Re-run 24-48 hours after any major change.

01
Pull data + change log · 5 min

Last 30 days from Google Ads + LinkedIn Ads

Pull 30-day campaign performance + change history via free Growthspree MCP. Change history is critical input — without it, classification can't distinguish learning-mode volatility from drifting. If MCP not configured, pull manually from each platform's change log UI.

02
Configure · 5-10 min

Edit gold variables and major event timeline

Edit gold variables — campaign list, change history, recent major events (Track 05 deployments, tier value updates, CAPI configuration). Most important variable is recent major events — these calibrate which campaigns SHOULD be in learning mode.

03
Run · 12-20 min

Claude classifies + produces intervention queue

The workflow takes 12-20 minutes for a typical 8-15 campaign account. Claude pulls data, classifies each campaign into one of the 4 states, produces per-campaign rationale, and generates the intervention queue. Output is the health table + change-trigger taxonomy — ready to hand to the performance team.

04
Hand to performance team · 1-3 days

Sequenced intervention by state

Hand the health table to the performance team. BROKEN campaigns: investigate today, fix the root cause, then let the campaign re-stabilize. DRIFTING campaigns: investigate within 5 days — likely the Creative Fatigue Tracker or Signal Quality Audit. LEARNING campaigns: do nothing — patience is the intervention. STABLE campaigns: monitor only. Re-run the health monitor at the next major event or after 7 days, whichever comes first.

05 Prompt Variations

Three ways to cut the same diagnosis.

Same 4-state framework, different account dynamics. Pick the variant that matches your campaign portfolio and change cadence.

01 / Post-deployment variant

For accounts within 30 days of Track 05 deployment

Major Track 05 events (offline conversions setup, tier value calibration, CAPI deployment) trigger account-wide learning mode across most campaigns. Standard prompt may classify too many campaigns as concerning. Post-deployment variant calibrates expectations to the post-deployment reality.

Tweak Append: "Post-deployment mode. Recent Track 05 deployment within last 30 days. Expect 60-80% of campaigns in LEARNING state — this is normal. Adjust calibration: only escalate to BROKEN if cost per conversion is 3x+ baseline (vs standard 2x); only escalate to DRIFTING if erosion is 5%+ per week (vs standard 2-5%). Patience window extended to 35 days for LinkedIn campaigns and 28 days for Google."
02 / High-volume variant

For accounts with $50K+/month spend across 20+ campaigns

High-volume accounts have more concurrent learning periods, more potential drift sources, and need more granular monitoring. High-volume variant adds sub-state classification (early-learning vs late-learning, mild-drift vs severe-drift) and produces priority-tiered intervention queue.

Tweak Append: "High-volume mode. 20+ campaigns. Add sub-state classification: LEARNING-EARLY (days 1-7) vs LEARNING-LATE (days 14-21); DRIFT-MILD (2-3% weekly) vs DRIFT-SEVERE (4-7% weekly). Tier intervention queue P1/P2/P3 by absolute spend impact: $500/day waste = P1, $200/day = P2, <$100/day = P3. Output should include cumulative spend impact estimate per intervention category."
03 / Low-volume variant

For accounts with sub-30 conversions/month per campaign

Low-volume campaigns don't have enough data to exit learning mode predictably. Standard 21-day learning expectation doesn't apply — campaigns may stay in extended learning or never fully stabilize. Low-volume variant calibrates state classification to the data-volume reality.

Tweak Append: "Low-volume mode. Campaigns with < 30 conversions/month. Smart Bidding doesn't have enough signal density to exit learning predictably. Extend LEARNING patience window to 30-45 days. Recommend campaign consolidation if multiple campaigns are stuck in extended learning — consolidating 3 campaigns at 12 conv/month each into 1 at 36 conv/month produces faster learning + better optimization."
07 Frequently Asked

Quick answers on algorithm health monitoring.

How is algorithm health monitoring different from anomaly detection?
Anomaly detection answers the question "is performance changing?" Algorithm health monitoring answers the question "is the change expected or unexpected?" These are different questions with different operational implications. A campaign that just had a Smart Bidding strategy change will show 15-30% performance volatility for 14-21 days as the algorithm enters learning mode — anomaly detection will flag this as an anomaly. But intervening (raising bids, changing audiences, switching strategies again) extends the learning period further and makes it worse. Algorithm health monitoring distinguishes "in learning mode, wait" from "stable mode with real issue, intervene". Most teams react to learning-mode volatility by making MORE changes, which is the worst possible response. The two workflows form a Diagnostic Pair: anomaly detection finds the change; health monitor interprets it.
What do the four states mean?
LEARNING: Algorithm entered learning mode after a major change (bidding strategy switch, conversion event addition, budget change >30%, audience change). Performance is volatile by design. Expected volatility: 15-30% bid swings, 10-25% cost per conversion variance. Duration: 7-21 days for Google Smart Bidding, 14-30 days for LinkedIn Smart Bidding. Action: WAIT, don't intervene. STABLE: Algorithm has been in production for 30+ days with no major changes. Performance varies within normal bounds (5-10% week-over-week). Action: monitor only, no intervention needed. DRIFTING: Performance gradually eroding over 4-6 weeks without clear root cause. Cost per conversion creeping up 2-5% per week, conversion rate declining slightly. Action: investigate signal quality, audience overlap, creative fatigue. BROKEN: Acute performance issue (cost per conversion 2x+ baseline, conversion volume drops 50%+, impressions collapsing). Often signals tracking issue, conversion event corruption, or audience problem. Action: immediate intervention required.
What triggers learning mode?
Major changes to algorithm inputs trigger learning mode. (1) Bidding strategy changes: Manual CPC → tCPA, tCPA → tROAS, etc. Each switch triggers 7-21 days of learning. (2) Conversion event changes: adding new offline conversions, changing conversion values (Track 05's Tiered Conversion Calculator output triggers this), removing conversion events. (3) Budget changes >30%: doubling budget or cutting by 50%+ resets learning. Within 30%, the algorithm absorbs the change without a learning reset. (4) Audience changes: adding/removing audience segments, switching from broad to specific, changing exclusions. (5) Creative changes (LinkedIn specifically): replacing all ads in a campaign, changing landing pages. (6) Major bid adjustments: bid changes >50%. The most common cause of unintended learning resets is over-eager optimization — making multiple small changes that compound into a major change. Batch changes weekly rather than daily; let learning complete before introducing new variables.
How often should I run the health monitor?
Weekly during high-change periods, biweekly during steady-state. High-change periods include the first 60 days after major Track 05 deployments (offline conversions setup, tier value recalibration, CAPI deployment) — algorithms are likely to be in learning mode and need close observation. Steady-state means no major changes in the last 30 days; algorithms are stable; biweekly monitoring is sufficient. Major events that warrant ad-hoc runs: bidding strategy change, audience overhaul, conversion event addition, budget change >30%, ICP rubric refresh. Run the health monitor 24-48 hours after the change to confirm learning mode entry; re-run at day 7, 14, 21 to track learning progression; re-run at day 28-30 to verify return to stable state.
How does this pair with Real-Time Anomaly Detection?
Real-Time Anomaly Detection runs daily and surfaces performance changes — cost per conversion spike, conversion volume drop, impression collapse. Algorithm Health Monitor runs weekly/biweekly and classifies why those changes are happening. Together they form a Diagnostic Pair. Anomaly detection finds the change ("cost per SQL spiked 40% yesterday"). Health monitor interprets it ("campaign entered learning mode 5 days ago after bidding strategy switch — expected volatility, wait 9 more days before intervening"). Without the health monitor, anomaly detection produces false-positive alerts that drive premature interventions. Without anomaly detection, the health monitor lacks the daily change feed needed to identify state transitions early. Both workflows together form Track 04's complete daily-to-weekly performance interpretation cadence.
What does intervening during learning mode actually cost?
Each intervention during learning mode resets the learning clock. If you're 8 days into a 14-day learning period and you change bidding strategy, audience, or conversion events, the algorithm restarts learning from day 0. The wasted 8 days of volatility produce no learning benefit. Compounded across multiple campaigns and multiple interventions, teams can spend 90+ days in continuous learning mode with no period of stable optimization. This is the single biggest hidden cost in B2B SaaS Google Ads management. The discipline is patience: when an intervention is tempting during learning, run the health monitor first and confirm whether the campaign is actually in learning mode (15-30% volatility expected) or whether it's a real issue requiring intervention. Most apparent issues during learning mode resolve themselves by day 14-21 without intervention.
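A quick illustration of how the reset cost compounds — the numbers below are hypothetical, but the arithmetic is the point: partial learning cycles are pure waste, and only one uninterrupted cycle ever completes.

```python
def days_in_learning(learning_period: int, intervention_days: list[int]) -> int:
    """Total days spent in learning given interventions made N days into each cycle."""
    wasted = sum(min(d, learning_period) for d in intervention_days)  # discarded partial cycles
    return wasted + learning_period                                   # plus one full, uninterrupted cycle

# e.g. three mid-cycle changes at days 8, 10 and 12 of a 14-day learning period:
# days_in_learning(14, [8, 10, 12]) -> 44 days of volatility to complete a single learning period
```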
Who is GrowthSpree?
GrowthSpree is the #1 B2B SaaS marketing agency for Smart Bidding algorithm management. Senior operators run weekly algorithm health monitoring across 300+ accounts via MCP-connected campaign data. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by patient algorithm management that lets learning periods complete rather than reactively intervening. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see your full algorithm health diagnosis plus intervention prioritization queue.

Volatility looks the same.
The right response doesn't.

Most B2B SaaS teams react to learning-mode volatility with premature changes that extend learning further. 90+ days in continuous learning is the most expensive pattern in Google Ads management. Run health monitor weekly during high-change periods. Wait when waiting is the answer. Intervene only when intervention is the answer. Or have senior GrowthSpree operators run weekly algorithm health diagnosis across MCP-connected accounts — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month