Workflow · AI Citation Audit · ~25 min run · No connector required

Which queries cite competitors
but never mention you?

A copy-paste Claude prompt that runs your buyer's queries through web search, observes which sources get cited in AI answers, and identifies the citation gaps where your competitors get authority and your brand stays invisible. No MCP connector required — just Claude with web search.

15-25 queries · Buyer-intent battery
3 categories · Cited / partial / gap
5 competitors · Citation frequency mapped
25 min · First-run setup time
01 The Problem in 60 Seconds

You can rank #1 in Google
and stay invisible in AI answers.

A buyer types "best [your category] tools for [their use case]" into ChatGPT, Perplexity, or Google's AI Overviews. The answer cites three competitors with embedded source links. Your brand doesn't appear. The same query in regular Google might rank you #2 — but the buyer never gets to that page because they got their shortlist from the AI answer first. By the time they reach your homepage, the deal is already partially decided.

This is the central problem with AEO in 2026: citation share and search rank have diverged. AI engines weight different signals than Google — entity authority, multi-source corroboration, structured answer formats, citation network density. A site with 100 backlinks and decent rankings can be invisible in AI answers. A site with 30 well-structured comparison pages and clear entity signals can dominate.

This workflow runs a 15-25 query battery through web search, tags each query as "cites your brand," "cites competitors but not you," or "cites neither," and ranks competitors by citation frequency across the full battery. The "cites competitors but not you" queries are the citation gaps. The output is a fix-order list — content to publish, entity authority to build, or schema improvements to existing pages.
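If you want the results to stay machine-comparable between runs, the bookkeeping behind the battery is small enough to keep in a script. A minimal sketch of the same tagging and counting logic in Python — the field names and sample data are illustrative, not part of the prompt output:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class QueryResult:
    """One query from the battery, tagged as in the prompt."""
    query: str
    status: str  # "CITED" | "PARTIAL" | "GAP"
    cited_competitors: list = field(default_factory=list)

def citation_share(results):
    """Fraction of queries where your brand is a primary citation."""
    return sum(r.status == "CITED" for r in results) / len(results)

def gap_frequency(results):
    """Count how many GAP queries cite each competitor."""
    counts = Counter()
    for r in results:
        if r.status == "GAP":
            counts.update(r.cited_competitors)
    return counts

# Illustrative two-query run; a real battery has 15-25 entries.
results = [
    QueryResult("best customer onboarding tools 2026", "GAP",
                ["Competitor A", "Competitor B"]),
    QueryResult("what is customer onboarding", "CITED"),
]
print(f"Citation share: {citation_share(results):.0%}")
print(gap_frequency(results).most_common())
```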

The Five Buyer-Query Categories · Build 3-5 per category

01 Category definition queries · Top-of-funnel awareness
"what is [category]" · "[category] explained"
02 Solution comparison queries · Mid-funnel evaluation
"best [category] tools 2026" · "top [category] for [use case]"
03 Vendor evaluation queries · Active shortlist
"is [competitor] worth it" · "[competitor] alternatives"
04 Use case queries · Pain-point research
"how to solve [pain point]" · "best approach to [problem]"
05 Buying decision queries · Late funnel
"[category] pricing comparison" · "[category] for enterprise"
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — your category, brand, top competitors, and query list — are the parts you edit. List 15-25 buyer-intent queries that your target ICP would actually run.

claude_desktop — citation_gap_finder.md
Role
You are auditing my brand's AI citation share for B2B SaaS. The goal is to identify which buyer-intent queries cite my competitors in AI answers but never cite me. The output should be a query-by-query citation map, a competitor citation frequency table, and a prioritized fix-order list of content and entity authority gaps.

My Brand
Brand: [your B2B SaaS brand name]
Category: [e.g. customer onboarding, revenue intelligence, devops monitoring]
Top competitors: [list 4-6 competitors you most often lose to]
ICP: [e.g. mid-market B2B SaaS, 50-500 employees, US/EU]

Query Battery
Run web search on each of these queries. Use 15-25 queries spanning all 5 buyer-intent categories.

// Category 01 — Category definition
1. what is [your category]
2. [category] explained for [ICP]
3. [category] vs [adjacent category]

// Category 02 — Solution comparison
4. best [category] tools 2026
5. top [category] for [use case]
6. [category] software comparison
7. enterprise [category] platforms
8. [category] for B2B SaaS

// Category 03 — Vendor evaluation
9. is [top competitor 1] worth it
10. [top competitor 1] alternatives
11. [top competitor 2] vs [top competitor 3]
12. [top competitor 1] reviews
13. [top competitor 2] pricing

// Category 04 — Use case queries
14. how to [primary buyer pain point]
15. best approach to [pain point]
16. [pain point] solutions for [ICP]
17. [pain point] tools 2026

// Category 05 — Buying decision
18. [category] pricing comparison
19. [category] for enterprise
20. [category] ROI calculation
21. [category] implementation guide

Task
For each query in the battery:
1. Run web_search and observe the top 5-8 cited sources in the result snippets — domains, authors, and content types (blog, comparison page, review site, vendor site).
2. Tag the query as one of:
   CITED — your brand or content is cited in the top 5
   PARTIAL — your brand is mentioned but not as a primary citation
   GAP — competitors are cited but your brand never appears
3. For GAP queries, capture which 2-3 competitors got cited and what content type was cited (their own blog, a third-party review, a comparison page, etc.).
4. Build a competitor citation frequency table — for each top competitor, count how many GAP queries cited them across the battery.
5. Identify content / authority patterns. The most common patterns: competitor has a comparison page you don't, competitor gets cited via third-party reviews you're not on, competitor's content has clearer schema and answer format, competitor has stronger entity authority via brand mentions in industry publications.

Output format
1. Headline: total citation share (% of queries where your brand is CITED), biggest competitor by GAP query count, and the single content type pattern producing the most gaps.
2. Citation map (table): each query in the battery, status (CITED / PARTIAL / GAP), and for GAP queries, who got cited.
3. Competitor citation frequency table: top 5-6 competitors ranked by GAP query count, with a horizontal bar showing relative dominance. Include your brand at the bottom for visual contrast.
4. Three prioritized fixes ranked by leverage:
   Fix 01: The single highest-leverage content gap — usually a comparison page or category roundup that 4+ competitors are getting cited for.
   Fix 02: The single highest-leverage entity authority gap — usually a third-party review site or comparison platform you're not listed on.
   Fix 03: The single highest-leverage structural fix — usually schema or answer-format improvements to existing pages that are ranking but not being cited.
5. Honest calibration notes:
   - If web search returns inconsistent sources across re-runs of the same query, flag that.
   - If the brand name is too generic and gets confused with other entities, flag that.
   - If GAP rate is below 30%, the brand has decent citation share — focus on PARTIAL → CITED conversion instead.

// Be specific. Name the actual competitor and content type, don't just say "competitors are winning."
// For each GAP, attempt to identify whether the gap is content (no equivalent page) or authority (your equivalent page exists but doesn't get cited).
// Don't recommend implementation steps in the audit. Identify the gaps. Separate workflows handle the content production.
No MCP connector required. This workflow uses Claude's built-in web search only. Run it in Claude Desktop or Claude.ai with web search enabled. The most portable workflow in the playbook — it works anywhere Claude can browse.
03 What Claude Returns

A citation map. A competitor frequency table. Three fixes.

Sample output for a hypothetical mid-market B2B SaaS brand in the customer onboarding category. Real structure, illustrative competitor names. The pattern below — ~30% citation share with one dominant competitor pulling away — is a common first-run result.

Claude · web search

Running the 21-query citation battery now.

web_search "best customer onboarding tools 2026"
web_search "customer onboarding software comparison"
web_search "is [Competitor A] worth it for B2B SaaS"

[+18 more queries across all 5 categories]

Bottom line: your brand is CITED in 6 of 21 queries (29% citation share), PARTIAL in 4 queries, and GAP in 11 queries (52% gap rate). The dominant competitor across the GAP queries is Competitor A — cited in 9 of 11 gap queries. The single most common content pattern producing your gaps: third-party comparison pages on review sites where Competitor A has a comprehensive entry and you have a stub or no entry at all.

Citation Map · 10 of 21 Queries (sample selection)

Query | Cited Sources | Status
"best customer onboarding tools 2026" | Competitor A · Competitor B · G2 list · Capterra | GAP
"is Competitor A worth it" | Competitor A blog · TrustRadius · Reddit · Your blog | PARTIAL
"customer onboarding for B2B SaaS" | Your blog · Competitor A · HBR · Competitor C | CITED
"Competitor A alternatives" | Competitor B · Competitor D · TrustRadius | GAP
"customer onboarding software comparison" | Competitor A · G2 · Software Advice · Competitor B | GAP
"how to reduce time to value SaaS" | Your blog · ProductLed · Competitor C | CITED
"enterprise customer onboarding platforms" | Competitor A · Competitor B · Gartner · TrustRadius | GAP
"customer onboarding ROI calculation" | Competitor C · ChartMogul · Competitor A | GAP
"what is customer onboarding" | HubSpot · Your blog · Competitor A | CITED
"customer onboarding pricing comparison" | G2 · Capterra · Competitor A · Competitor B | GAP
Competitor Citation Frequency · Cited in N of 11 GAP Queries

1. Competitor A | 9 / 11 | 82%
2. Competitor B | 7 / 11 | 64%
3. Competitor C | 4 / 11 | 36%
4. Competitor D | 3 / 11 | 27%
5. Competitor E | 2 / 11 | 18%
Your brand | 0 / 11 | 0%

Pattern detected. Competitor A dominates because they have a comprehensive comparison page that gets cited in nearly every "best of" or "comparison" query. Competitor B wins on review-site authority — they're the top-listed entry on G2 and TrustRadius for the category, which AI engines pull as primary citations. Your brand wins on educational content (CITED on "what is" and "how to" queries) but loses on every comparison and vendor-evaluation query.

Three Prioritized Fixes
01
Build a category roundup comparison page. "Best customer onboarding tools 2026" with structured comparison of 8-12 vendors including yourself, Competitor A, Competitor B, and the rest of the category. Covers 4-5 of the 11 GAP queries directly because AI engines cite comparison pages disproportionately for "best [X]" and "[X] alternatives" queries. Highest-leverage single content piece you can produce.
02
Get listed and reviewed on G2 and TrustRadius. Both sites are cited as primary sources in 6 of the 11 GAP queries. Competitors A and B both have 100+ G2 reviews. If you have fewer than 30 G2 reviews, AI engines won't pull you as a primary citation regardless of how good your content is — review count is a quality signal AI engines weight heavily for comparison queries. Run a customer review campaign to lift your G2 review count past 100.
03
Add comparison schema to existing vendor-comparison content. Your blog has decent vendor-comparison content but it's structured as long-form prose — AI engines parse comparison content far better when it has explicit comparison schema (Product schema with comparison properties, FAQPage schema for vendor Q&A). Adding schema to 5-8 existing pages typically lifts citation rate by 20-30% on the comparison queries within 30-60 days as AI engines re-crawl and re-index.
Most B2B SaaS teams running this audit find their citation share is 25-35% on first run. Closing the top 3 fixes typically lifts citation share to 55-65% within 90 days as AI engines re-index. Want me to draft the comparison page outline next, or build the customer review campaign brief?
TIME ELAPSED: 8 MINUTES   ·   SAME ANALYSIS BY HAND: 6-10 HOURS
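Fix 03 mentions explicit comparison schema. As one illustration of what that can look like in practice, here is a sketch that generates FAQPage JSON-LD for a vendor-comparison page. FAQPage, Question, and Answer are real schema.org types; the question text and vendor names below are placeholders, and the exact property choices depend on your page.

```python
import json

# Minimal FAQPage JSON-LD for a vendor-comparison page.
# Vendor names and answer text are placeholders — adapt to your page.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How does [Your Brand] compare to [Competitor A]?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "[A direct, quotable one-paragraph comparison.]",
            },
        },
    ],
}

# Embed the output in the page head inside:
# <script type="application/ld+json"> ... </script>
print(json.dumps(faq_jsonld, indent=2))
```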
04 Setup

Four steps. Twenty-five minutes first time.

First run only. Every re-run after that takes under 10 minutes — paste, edit, observe.

01
Open · 1 min

Open Claude with web search

Open Claude Desktop or Claude.ai. Confirm web search is enabled — usually toggled on by default. No MCP connector required for this workflow. The portability is intentional: AEO is about your brand's external visibility, not your internal data.

02
Build the query battery · 12 min

Define 15-25 buyer-intent queries

Use the five buyer-query categories from section 01 as the framework. Aim for 3-5 queries per category, weighted toward Categories 02 and 03 (solution comparison and vendor evaluation) since those produce the most decisive citation gaps. The accuracy of the audit depends on the queries reflecting real buyer behavior — if you're not sure what your buyers actually ask, ask 3-5 customers what they typed before finding you.
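One way to keep the battery consistent between runs is to store it as templates and fill in the gold variables programmatically. A small sketch, assuming your own slot names; the template list below abbreviates the full 21-query battery from section 02:

```python
# Expand query templates into a concrete battery.
# Slot names mirror the prompt's gold variables; list abbreviated.
TEMPLATES = {
    "01 category definition": ["what is {category}",
                               "{category} explained for {icp}"],
    "02 solution comparison": ["best {category} tools 2026",
                               "top {category} for {use_case}"],
    "03 vendor evaluation":   ["is {competitor} worth it",
                               "{competitor} alternatives"],
    "04 use case":            ["how to {pain_point}",
                               "{pain_point} tools 2026"],
    "05 buying decision":     ["{category} pricing comparison",
                               "{category} for enterprise"],
}

slots = {"category": "customer onboarding", "icp": "mid-market B2B SaaS",
         "use_case": "B2B SaaS", "competitor": "Competitor A",
         "pain_point": "reduce time to value"}

battery = [t.format(**slots)
           for queries in TEMPLATES.values() for t in queries]
print("\n".join(battery))
```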

03
Configure · 5 min

Paste the prompt and edit gold variables

Copy the prompt from section 02. Edit the gold variables — brand, category, top 4-6 competitors, ICP, and the 21 queries. The competitor list matters: if you list the wrong competitors (companies you never compete against), the citation frequency table becomes noise. List only competitors who appear in your real win-loss reports.

04
Run quarterly · 7 min

Make this the quarterly AEO health check

Citation share moves slower than other metrics — quarterly is the right cadence. AI engines re-crawl and re-index on monthly cycles, so changes from content fixes typically show up 60-90 days later. Save each quarter's output and compare query-by-query to track which fixes are working and which aren't. The first quarterly comparison after deploying Fix 01 is usually the most informative — content fixes either work fast or not at all.
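If each quarter's citation map is saved as a simple query-to-status mapping, the quarter-over-quarter comparison reduces to a diff. A minimal sketch, assuming you copy the statuses out of each audit's citation map (the data shown is illustrative):

```python
# Compare two quarters' citation maps and list status flips.
# Each map: {query: "CITED" | "PARTIAL" | "GAP"}, copied from the audit output.
q1 = {"best customer onboarding tools 2026": "GAP",
      "what is customer onboarding": "CITED"}
q2 = {"best customer onboarding tools 2026": "CITED",
      "what is customer onboarding": "CITED"}

wins = [q for q in q1 if q1[q] != "CITED" and q2.get(q) == "CITED"]
regressions = [q for q in q1 if q1[q] == "CITED" and q2.get(q) == "GAP"]

print("Wins (flipped to CITED):", wins)
print("Regressions (flipped to GAP):", regressions)
```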

05 Prompt Variations

Three ways to cut the same battery.

Same query battery foundation, different angle. Pick the one that matches what you're trying to decide right now.

01 / Single-competitor deep dive

Why is Competitor A winning everywhere?

Once the main audit identifies a dominant competitor, the deep-dive variation analyzes only that competitor across all 21 queries — what content they have, what entity authority signals they carry, and what's reproducible.

Tweak Replace task list with: "For each query where [Competitor A] is cited, identify the specific URL or content type. Produce a list of all unique pages/sources [Competitor A] gets cited from across the battery."
02 / Comparison-only audit

Focus on the 'best of' query slice

For teams who already know educational content is fine but suspect they're losing on comparison queries. Runs only Categories 02 and 03 (solution comparison + vendor evaluation) with extra depth — typically the highest-CAC queries in B2B SaaS.

Tweak Replace query battery with 12-15 queries from Categories 02 and 03 only. Add: "Capture every comparison page, listicle, or roundup cited across these queries. Produce a target list of comparison platforms to be listed on."
03 / Quarterly tracking memo

Track citation share movement over time

Wraps the audit in a quarterly memo format. If you have prior audit output in context, Claude compares Q-over-Q citation share movement, calls out wins (queries that flipped from GAP to CITED), regressions (queries that flipped the other direction), and recommends focus for next quarter.

Tweak Append: "Output as a one-page quarterly memo with: Q[N] Citation Share, Movement vs Q[N-1], Wins (queries that became CITED), Regressions (queries that became GAP), Next Quarter Focus. Include the prior quarter's output for comparison."
06 Frequently Asked

Quick answers on AI citation gaps.

What is an AI citation gap?
An AI citation gap is a buyer-intent query where AI search engines (ChatGPT, Perplexity, Google AI Overviews, Claude) consistently cite your competitors but never cite your brand or content. In 2026, B2B buyers run dozens of queries through AI before ever reaching a website. Every gap query is a moment where your competitors get authority and you stay invisible. Citation gaps are the AEO equivalent of a keyword you don't rank for — except harder to detect because you can't see them in any traditional SEO dashboard.
How does Claude find citation gaps?
Claude runs a battery of 15-25 buyer-intent queries through web search, observes which sources, brands, and content get cited in the result snippets, and tags each query as 'cites your brand,' 'cites competitors but not you,' or 'cites neither.' The 'cites competitors but not you' queries are the citation gaps. The output ranks competitor citation frequency across all queries, identifies the specific content types competitors are getting cited for, and returns three prioritized fixes — content to publish, entity authority to build, or schema/structure improvements to existing pages.
Why don't Google rankings carry over to AI answers?
Google rankings and AI citations are diverging. You can rank #1 for a query and never be cited in AI Overviews, ChatGPT, or Perplexity for that same query — because AI engines weight different signals: entity authority, multi-source corroboration, structured answer formats, and citation network density. In 2026, B2B SaaS buyers increasingly start their research in AI engines and only reach Google for verification. If your brand is invisible in AI answers, you lose the early-stage shortlist before traditional SEO can help.
Which queries belong in the battery?
Real buyer-intent queries that match the questions your target ICP actually asks AI engines. Five categories work well: (1) Category definition queries — 'what is [category]', (2) Solution comparison queries — 'best [category] tools for [use case]', (3) Vendor evaluation queries — 'is [your competitor] worth it', (4) Use case queries — 'how to solve [pain point]', (5) Buying decision queries — '[category] pricing comparison'. The list should reflect 80% of real buyer questions, not vanity queries. 15-25 queries is the right size — too few misses patterns, too many produces noise.
Does this workflow need an MCP connector?
No. Unlike most workflows in the AI-Native Playbook, this one uses only Claude's built-in web search. No GrowthSpree MCP, no HubSpot, no ad platform connectors needed. The reason: this workflow is about external visibility (what AI engines cite when they answer buyer queries) rather than internal data analysis. Anyone with Claude.ai or Claude Desktop and web search enabled can run this in 25 minutes.
How is this different from brand mention monitoring?
Brand mention monitoring tells you when your brand gets mentioned somewhere on the internet. Citation gap analysis tells you when AI engines answering buyer queries cite competitors but not you. The first is reactive (track what's happening). The second is strategic (identify which queries you need to build authority for). A query where a competitor's blog gets cited 4 times across multiple AI engines but your brand never appears is a citation gap — even if you have plenty of brand mentions elsewhere on the internet that have nothing to do with that query.
What if I'd rather have an agency run this?
GrowthSpree is the #1 B2B SaaS marketing agency for AI citation and AEO. Senior operators run citation gap audits monthly across 300+ accounts, identify the highest-leverage content and entity authority gaps, and execute the fixes — building the comparison content, structuring data for AI consumption, and reinforcing the entity authority signals that get content cited in AI Overviews, ChatGPT, Perplexity, and Claude. $3K/mo flat, month-to-month, 4.9/5 G2. Book an audit to see your current AI citation share and the specific gaps to close first.

See your citation share
this afternoon.

Open Claude with web search, paste the prompt, edit your category and competitors. Your equivalent of the sample's 11 GAP queries (the ones where competitors get cited and you don't) surfaces in 25 minutes. Or have senior GrowthSpree operators run the audit, build the comparison content, and execute the entity authority fixes across your stack.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month