A copy-paste Claude prompt that runs head-to-head comparison searches across Google and AI engines for every you-vs-competitor and competitor-vs-you query, classifies each query by visibility status, and prioritizes which vs-pages to build first. Vs-comparison queries convert to demo at 12-25% — buyers run them at the moment of final shortlist selection. Be invisible there and you lose the deal before sales ever hears about it.
A B2B SaaS buyer narrows their evaluation to two finalists — yours and a competitor's. They Google "Competitor vs YourBrand." The first organic result is the competitor's own "Competitor vs YourBrand" page where they cherry-pick features they win and bury the ones they lose. The buyer reads it for 6 minutes. By the time they get to your site (if they ever do), the framing is already set against you. You lost the deal during a search you didn't know was happening.
This pattern repeats for every competitor pair. Most B2B SaaS teams have built 1-2 vs-pages — usually for the most-asked-about competitor, and in only one direction (YourBrand vs Competitor). The reverse direction (Competitor vs YourBrand) is written entirely by the competitor, and every other competitor pair goes unaddressed. The vs-page count is small not because the work is hard but because no one has surveyed the full battery of vs-comparison queries buyers actually run.
This workflow runs the full survey. For your top 6-10 competitors, in both directions. Claude searches Google and AI engines for every "YourBrand vs X" and "X vs YourBrand" query, classifies each by visibility state (You Win / Shared / Competitor Owns / Open), sizes the search opportunity, and outputs a prioritized production queue. Hand the queue to the Alternatives Page System workflow, ship the pages, watch the SERP positions flip over the next 8-12 weeks.
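The four visibility states and the priority ordering they imply can be sketched in a few lines. This is a hypothetical illustration, not part of the prompt itself — the urgency ranking (competitor-owned first, then open, shared, and wins) is an assumption based on the workflow's description, and the scan data is invented:

```python
from enum import Enum

class Visibility(Enum):
    YOU_WIN = "You Win"
    SHARED = "Shared"
    COMPETITOR_OWNS = "Competitor Owns"
    OPEN = "Open"

# Assumed urgency: reclaim competitor-owned SERPs first, fill open gaps
# next, optimize shared SERPs, and only monitor the queries you win.
PRIORITY = {Visibility.COMPETITOR_OWNS: 0, Visibility.OPEN: 1,
            Visibility.SHARED: 2, Visibility.YOU_WIN: 3}

# Invented sample scan results for illustration.
scan = [("YourBrand vs Competitor B", Visibility.COMPETITOR_OWNS),
        ("Competitor A vs YourBrand", Visibility.OPEN),
        ("YourBrand vs Competitor C", Visibility.YOU_WIN)]

# Production queue: most urgent gap first.
queue = sorted(scan, key=lambda item: PRIORITY[item[1]])
```

Because `sorted` is stable, queries sharing a state keep their scan order — useful if the scan itself is already ordered by lost-deal frequency.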
The gold variables — your brand, category, and the competitor list — are the parts you edit. No MCP connector required; this workflow runs on Claude's web search alone.
web_search only — same approach as the AI Citation Gap Finder. The output feeds directly into the Alternatives Page System workflow as the page brief queue. Run vs-Comparison Gap Finder first, then Alternatives Page System for each top-priority gap.

Sample output for a hypothetical mid-market B2B SaaS brand competing against 5 named alternatives: 10 queries scanned (5 competitors × 2 directions), classified by state, with production priority order.
Running the vs-comparison gap scan now. 10 queries to check across 5 competitors in both directions.
+ 7 more searches across remaining competitor pairs
10 queries scanned across 5 competitor pairs. Distribution: 2 You Win · 3 Shared · 4 Competitor Owns · 1 Open. Most urgent finding: Competitor B owns BOTH directions of the YourBrand-vs-CompetitorB comparison with their own page ranking #1. They're writing the narrative against you and you're entirely absent.
Direction: YourBrand vs Competitor B
Direction: Competitor A vs YourBrand
Direction: Competitor D vs YourBrand
Direction: Optimize, not build
Direction: Defer, monitor

SERP positions for vs-comparison queries shift over weeks-to-months as competitors ship new pages. Run quarterly, hand the queue to the Alternatives Page System workflow, ship the pages.
Pull from sales call notes, lost-deal data, prior Citation Gap Finder runs, and the alternatives section of your G2/Capterra listings. Order by frequency in lost-deal mentions if you have that data — the competitors that show up most in late-stage deals are the ones whose vs-pages matter most. Save the list as a markdown file you reuse each quarter.
Copy the prompt from section 02. Edit the gold variables — your brand, category description, existing vs-pages already live, and the competitor list. The existing vs-pages field matters — without it, Claude may recommend building pages that already exist. List both URLs and direction (e.g. "Acme vs Competitor A — yourbrand.com/vs/competitor-a").
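One possible shape for the gold-variable block — all names and URLs here are placeholders, not the workflow's actual prompt fields:

```text
BRAND: Acme
CATEGORY: mid-market project management software for agencies
EXISTING_VS_PAGES:
  - Acme vs Competitor A — yourbrand.com/vs/competitor-a
COMPETITORS:
  - Competitor A
  - Competitor B
  - Competitor C
```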
Each pair generates 2 queries (one per direction). For 6 competitors, that's 12 web searches. For 10 competitors, 20 searches. Claude runs them in sequence and classifies each. The full scan takes 3-5 minutes of compute time. No MCP connectors are required — same as AI Citation Gap Finder, this runs on Claude's web search alone.
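The query battery above is simple enough to enumerate mechanically. A minimal sketch, assuming placeholder brand and competitor names — Claude builds this list from the gold variables, so the helper below is illustrative only:

```python
BRAND = "YourBrand"
competitors = ["Competitor A", "Competitor B", "Competitor C",
               "Competitor D", "Competitor E"]

def vs_queries(brand, rivals):
    """Each competitor pair yields two queries, one per direction."""
    queries = []
    for rival in rivals:
        queries.append(f"{brand} vs {rival}")   # brand-first direction
        queries.append(f"{rival} vs {brand}")   # competitor-first direction
    return queries

queries = vs_queries(BRAND, competitors)
print(len(queries))  # → 10 (5 competitors × 2 directions)
```

For 6 competitors the same helper yields 12 queries; for 10, it yields 20 — matching the search counts above.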
Copy the production queue from this workflow's output and paste into the Alternatives Page System workflow as the page brief input. Each top-priority gap becomes one page brief. Run quarterly — the cadence aligns with content production velocity (most B2B SaaS teams ship 1-3 vs-pages per month, so a quarterly re-scan keeps the queue full without wasted analysis).
Run Alternatives Page System →

Same vs-query foundation, different scope. Pick the one that matches your category dynamics.
For categories where AI engines drive more vs-comparison consideration than Google. Same methodology but with Claude querying Perplexity, ChatGPT, and Gemini directly to see how each AI engine answers the vs-comparison question. Surfaces AI-specific gaps that Google search may not show.
Beyond pure "X vs Y" queries, also scan modifier variants: "X vs Y pricing," "X vs Y reviews," "X vs Y features," "X or Y for [use case]." These often have different SERP patterns than the bare comparison query. Useful for high-traffic categories where the modifier variants have meaningful volume of their own.
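The modifier expansion can be sketched the same way — a hypothetical helper, with the modifier list and use-case phrasing taken from the variants named above:

```python
MODIFIERS = ["pricing", "reviews", "features"]

def with_modifiers(base_query, use_case=None):
    """Expand a bare 'X vs Y' query into its modifier variants."""
    variants = [base_query] + [f"{base_query} {m}" for m in MODIFIERS]
    if use_case:
        # "X or Y for [use case]" phrasing for intent-qualified searches
        x, y = base_query.split(" vs ")
        variants.append(f"{x} or {y} for {use_case}")
    return variants

print(with_modifiers("YourBrand vs Competitor B", use_case="agencies"))
```

Each base query expands to five, so a 10-query scan becomes a 50-query scan — budget the extra searches accordingly.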
For categories where buyers commonly compare three options (most established categories). Three-way queries are high-volume and almost always third-party-dominated — adding "X vs Y vs Z" brand-owned content to the SERP is rare and high-leverage. Treat each three-way SERP as a separate gap.
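Three-way queries enumerate as brand plus every pair of competitors. A sketch under the same placeholder names — the query phrasing ("X vs Y vs Z" with your brand first) is an assumption:

```python
from itertools import combinations

BRAND = "YourBrand"
competitors = ["Competitor A", "Competitor B", "Competitor C"]

# One three-way query per unordered competitor pair, brand listed first.
three_way = [f"{BRAND} vs {a} vs {b}"
             for a, b in combinations(competitors, 2)]
print(len(three_way))  # → 3 (C(3,2) pairs)
```

Note the combinatorial growth: 6 competitors yield 15 three-way queries, so scan these only for the pairs buyers actually shortlist together.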
Compile your top 6-10 competitors, run the gap finder, hand the production queue to the Alternatives Page System workflow, ship the pages over 90 days. Within 8-12 weeks, the SERP positions flip on the highest-priority pairs. Or have senior GrowthSpree operators run the quarterly gap finder, ship the vs-pages, and pair them with paid competitor conquesting — the same operating motion run across 300+ B2B SaaS accounts.