Workflow · Diagnostic · ~28 min run · Manual or tool-assisted

One AI engine isn't four.
You need four optimization plans.

A copy-paste Claude prompt that audits your brand's citations across ChatGPT, Claude.ai, Perplexity, and Gemini separately — then produces engine-specific optimization recommendations grounded in each engine's citation mechanics. Only 11% of cited domains overlap between ChatGPT and Perplexity. A combined visibility score hides which engines are actually working.

11% overlap · ChatGPT ∩ Perplexity citation pools
615× variance · Citation volume range across engines
4 engines · ChatGPT · Claude · Perplexity · Gemini
Quarterly refresh · Plus ad-hoc at major model releases
01 The Problem in 60 Seconds

Combined visibility scores
average four different distributions.

A B2B SaaS team's AI visibility tool reports a 35% citation rate across the AI search landscape. Looks decent. The team focuses on content production, expecting incremental gains. Six months later, the rate is still 35%. Why? Because the 35% combined score was hiding that ChatGPT was at 8%, Perplexity at 62%, Claude.ai at 31%, and Gemini at 41% — and the optimization that would lift ChatGPT (third-party brand entity strength) is fundamentally different from what would lift Perplexity (structured Q&A content). The team kept producing content optimized for the wrong engine: the effort only fed Perplexity, which was already near its ceiling, and did nothing for ChatGPT, which needed a different lever entirely. So the combined score never moved. Per-engine audits surface the specific mechanism failing in each engine; combined scores bury the diagnosis under an averaged number.

The deeper problem is that only 11% of cited domains overlap between ChatGPT and Perplexity (Averi, March 2026 analysis of 680M citations). Only 12% of sources match across ChatGPT + Perplexity + Google AI (Passionfruit, 15K queries). Citation volume variance across engines reaches 615× for the same brand on the same query. These are different platforms with different mechanics — ChatGPT relies primarily on parametric knowledge from training data; Perplexity retrieves real-time from 200B+ URLs; Claude.ai weighs site authority and explicit comparison structure; Gemini weighs Google ecosystem signals. The optimization playbook that works on one produces zero movement on the others.

This workflow runs structured per-engine analysis. Claude takes your prompt library + per-engine citation observations and produces a 4-engine diagnostic: per-engine citation rate, divergence analysis (where engines disagree), per-engine source pool analysis, root cause hypothesis (entity deficit, retrieval deficit, authority deficit, or freshness deficit), and per-engine optimization queue. Run quarterly aligned with Track 01; ad-hoc at major model releases (GPT-5, Claude 5, Gemini 3, etc.).
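Mechanically, the per-engine rate and divergence math behind that diagnostic is simple. If you want to sanity-check it against your own observation data before a run, a minimal Python sketch might look like the following; the data shape, field names, and helper functions are illustrative assumptions, and only the 5pp divergence threshold and the four default deficit labels come from the prompt itself.

python — per_engine_divergence_check.py
# Minimal sketch: per-engine citation rates, divergence flags, and the prompt's
# default root-cause mapping. Data shape and helper names are illustrative.
from collections import defaultdict

# One observation per (prompt, engine, run): status is "cited", "mention", or "absent".
observations = [
    {"prompt": "best B2B SaaS marketing agencies", "engine": "chatgpt", "status": "absent"},
    {"prompt": "best B2B SaaS marketing agencies", "engine": "claude", "status": "cited"},
    {"prompt": "best B2B SaaS marketing agencies", "engine": "perplexity", "status": "cited"},
    {"prompt": "best B2B SaaS marketing agencies", "engine": "gemini", "status": "mention"},
    # ...30-50 prompts x 4 engines x 3-5 runs in a real audit
]

DEFAULT_DEFICIT = {  # the prompt's default root-cause hypothesis per engine
    "chatgpt": "entity_strength_deficit",
    "perplexity": "retrieval_structure_deficit",
    "claude": "authority_signal_deficit",
    "gemini": "freshness_or_ecosystem_deficit",
}

def per_engine_rates(obs):
    """Citation rate per engine: % of runs where the brand appears as a cited source."""
    cited, total = defaultdict(int), defaultdict(int)
    for o in obs:
        total[o["engine"]] += 1
        if o["status"] == "cited":
            cited[o["engine"]] += 1
    return {engine: 100.0 * cited[engine] / total[engine] for engine in total}

def divergence_flags(rates, threshold_pp=5.0):
    """Flag each engine vs the cross-engine average using the prompt's 5pp threshold."""
    avg = sum(rates.values()) / len(rates)
    flags = {}
    for engine, rate in rates.items():
        delta = rate - avg
        if delta > threshold_pp:
            label = "over-citing"
        elif delta < -threshold_pp:
            label = "under-citing"
        else:
            label = "at-average"
        flags[engine] = (round(delta, 1), label, DEFAULT_DEFICIT.get(engine, "unknown"))
    return avg, flags

rates = per_engine_rates(observations)
avg, flags = divergence_flags(rates)
print(f"cross-engine average: {avg:.0f}%")
for engine, (delta, label, deficit) in flags.items():
    print(f"{engine:11s} {rates[engine]:5.1f}%  {delta:+6.1f}pp  {label:12s} default root cause: {deficit}")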

4-Engine Citation Mechanics Taxonomy · Each Engine Is Optimized Differently · Same content, different engines, different citation logic
Engine 1 · ChatGPT (883M MAU). Mechanism: parametric knowledge + RAG fan-out. Constructs answers more from training data than real-time citations. Wikipedia accounts for 47.9% of its top 10 cited sources. Visibility depends on brand entity strength built through third-party mentions and category co-occurrence over time. Lever: brand entity strength
Engine 2 · Claude.ai (growing in technical audiences). Mechanism: site authority + structured comparison. Weighs site authority and structured comparison tables heavily. Highest brand citation share in some studies (9.1%, Slate HQ). Favors content with clear hierarchy, explicit comparison structure, authoritative source signals. Lever: structural hierarchy
Engine 3 · Perplexity (8.79 citations/response). Mechanism: real-time crawl from 200B+ URLs. Retrieves in real time and provides the highest citation density. Rewards content answering specific sub-questions cleanly: H2 headings, comparison tables, answer-first paragraphs under 180 words. Reddit accounts for 6.6% of citations. Lever: structured Q&A density
Engine 4 · Gemini / AI Mode (triggers on roughly 48% of all Google searches). Mechanism: Google ecosystem signals. Weighs page speed, mobile-friendliness, and GSC indexing. Routes 21% of citations back to Google properties (YouTube, Maps). Lever: Google ecosystem health
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — your brand, prompt library, per-engine citation observations — are the parts you edit. Manual collection takes 2-4 hours; automated tooling (Passionfruit, OtterlyAI, Profound) takes 15 minutes. Run quarterly aligned with the rest of Track 01.

claude_desktop — per_engine_citation_audit.md
Role
You are running the quarterly Per-Engine AI Citation Audit for my B2B SaaS company. Take my prompt library + per-engine citation observations across ChatGPT, Claude.ai, Perplexity, and Gemini. Produce a per-engine diagnostic + engine-specific optimization queue grounded in each engine's citation mechanics.

My Brand
Brand: [your B2B SaaS brand name]
Site URL: [your domain]
Category: [e.g. "B2B revenue intelligence" — for buyer query context]
Top 5 competitor brands: [for comparative analysis]

Prompt Library
// 30-50 buyer-relevant prompts your prospects might ask AI engines. Mix of category questions, comparison queries, how-to queries, problem statements. Citation Gap Finder workflow output is a good starting library.
Category prompts (~10): [e.g. "best B2B SaaS marketing agency", "top revenue intelligence platforms"]
Comparison prompts (~10): [e.g. "Brand X vs Brand Y", "alternatives to X"]
How-to prompts (~10): [e.g. "how to set up offline conversions for B2B SaaS"]
Problem-statement prompts (~10): [e.g. "why is my MQL-to-SQL rate low"]

Per-Engine Citation Data
// Run prompts 3-5 times per engine OR paste structured data from automated tooling. For each engine, capture: % of prompts where your brand is cited, % where brand is mentioned without citation, top 3 of your URLs cited (if any), top 3 competitor URLs cited.
ChatGPT data: [citation rate, mention rate, your URLs cited, competitor URLs cited]
Claude.ai data: [same structure]
Perplexity data: [same structure]
Gemini / AI Mode data: [same structure]

Optional: Existing Visibility Tooling
Tooling in use: [e.g. "Passionfruit", "OtterlyAI", "Profound", or "manual"]
Cross-engine average citation rate: [combined score from tool, if available]

Task
1. Per engine (×4), produce diagnostic:
   - Citation rate: % of prompts where your brand appears as cited source
   - Mention rate: % where brand is named without citation
   - Source pool analysis: which of your URLs are cited; which competitor URLs are cited instead; what types of pages dominate (product pages, blog posts, comparison pages, third-party reviews)
   - Divergence flag: vs cross-engine average — over-citing (positive deviation), at-average (within 5pp), or under-citing (negative deviation 5pp+)
   - Root cause hypothesis: which of 4 deficit types is the limiting factor:
     - entity_strength_deficit: weak third-party brand mentions; missing from Wikipedia, LinkedIn editorial, review platforms — primary issue for ChatGPT under-citation
     - retrieval_structure_deficit: poor sub-question structure, lacking H2 hierarchy, no comparison tables — primary issue for Perplexity under-citation
     - authority_signal_deficit: weak structural hierarchy, no explicit comparisons, low domain authority — primary issue for Claude.ai under-citation
     - freshness_or_ecosystem_deficit: stale content, weak Google ecosystem signals, GSC indexing issues — primary issue for Gemini under-citation
2. Cross-engine divergence analysis:
   - Identify prompts where engines diverge most dramatically (e.g. cited by Perplexity but absent in ChatGPT; cited by Claude.ai but missing from Gemini)
   - Largest divergences = highest-leverage optimization targets
3. Per-engine optimization queue (3-5 actions per engine):
   - Each action grounded in that engine's specific citation mechanics
   - Specific implementation steps, not generic recommendations
   - Estimated impact (citation rate lift in next quarter)
   - Priority: P1 (highest under-citation engine) / P2 (mid-tier) / P3 (already strong)

Engine-specific optimization patterns
// Apply these patterns when producing recommendations.
For ChatGPT under-citation (entity strength deficit):
- Audit Wikipedia presence — is brand mentioned in Wikipedia entries for the category? If no, target Wikipedia inclusion via 3rd-party sources.
- Audit LinkedIn editorial mentions — is brand mentioned in LinkedIn long-form content by category influencers?
- Audit review platforms — G2, Capterra, TrustRadius listings + reviews. ChatGPT pulls from these.
- Audit category co-occurrence — does your brand appear in editorial pieces alongside top category competitors?
- Don't recommend on-page schema or content rewrites — these have minimal ChatGPT impact.
For Perplexity under-citation (retrieval structure deficit):
- Restructure top 5 category pages with H2 headings for each sub-question.
- Add comparison tables to product pages (Perplexity weighs explicit table structure).
- Reduce paragraph length — answer-first paragraphs under 180 words per section.
- Audit content freshness — Perplexity favors recent dates; pages older than 12 months without refresh get deprioritized.
- Verify PerplexityBot in robots.txt — must allow crawling.
For Claude.ai under-citation (authority signal deficit):
- Restructure key pages into a clear H1-H2-H3 hierarchy.
- Add explicit comparison tables (Claude.ai weighs structured comparisons heavily).
- Strengthen domain authority via earned link building.
- Audit citation source quality on outbound links — Claude.ai weighs source quality.
For Gemini / AI Mode under-citation (Google ecosystem deficit):
- Verify GSC indexing health — pages must be indexed by Google before Gemini can cite them.
- Audit Core Web Vitals — page speed + mobile-friendliness directly impact Gemini citation.
- Strengthen structured data (FAQPage, HowTo, Product schema).
- Build YouTube + Google Maps presence — Gemini routes 21% of citations to Google properties.

Output format
1. Headline: per-engine citation rate summary, divergence severity, top engine for optimization priority.
2. Per-engine citation rate matrix: 4 engines × rate / vs avg / root cause / priority.
3. Cross-engine divergence comparison: 8-10 representative prompts with per-engine citation status (cited / mention / absent).
4. Per-engine optimization queue: 4 sections, each with 3-5 actions grounded in engine-specific mechanics.
5. Honest calibration:
   - If ChatGPT citation rate < 5% but other engines > 25%, brand entity strength is the primary issue. On-page optimization won't fix this — it needs a 12-24 month brand entity building campaign.
   - If Perplexity citation rate < 10%, content structure is the primary issue. Restructuring top 5 pages typically lifts Perplexity citation 15-25% within 60-90 days.
   - If Gemini under-citation correlates with poor GSC indexing health, fix indexing first — no Gemini optimization will work on un-indexed pages.
   - If divergence between engines is < 10pp, brand has consistent presence across the AI search landscape — focus on incremental optimization rather than gap-closing.
   - For brands with < $5M ARR, ChatGPT optimization may not be cost-effective — entity strength building is expensive and slow. Focus on Perplexity + Claude.ai + Gemini first.

// Be specific in optimization recommendations. "Improve content structure" is generic. "Restructure /best-saas-marketing-agency page with 7 H2 sub-questions, add a 6-row comparison table, reduce average paragraph length from 240 to 160 words" is actionable.
// Don't conflate engines. Recommendations that work for ChatGPT (entity building) won't lift Perplexity. Recommendations that work for Perplexity (content structure) won't lift ChatGPT. Per-engine specificity is the whole point.
// Apply the 4-deficit framework explicitly. If under-citation root cause isn't clear, default to entity_strength_deficit for ChatGPT, retrieval_structure_deficit for Perplexity, authority_signal_deficit for Claude.ai, freshness_or_ecosystem_deficit for Gemini.
Citation Diagnostic Pair. AI Citation Gap Finder identifies which queries lack citations engine-agnostically; Per-Engine Audit identifies why specific engines aren't citing your content even when others do. Together they form Track 01's complete citation diagnostic layer — gap detection (which queries?) + per-engine root cause (why is this engine missing?). Pull prompt library from AI Citation Gap Finder output.
03 What Claude Returns

The per-engine matrix. The optimization queue.

Sample output for a hypothetical mid-market B2B SaaS brand with a 40-prompt library run across 4 engines. The audit surfaces dramatic divergence: 7% ChatGPT (entity deficit), 31% Claude.ai, 48% Perplexity (well-optimized), 18% Gemini (indexing issues). P1 priority: ChatGPT entity strength building.

Claude · per-engine citation analysis

Analyzing 40-prompt library × 4 engines = 160 data points (240 with 1.5× sampling per engine for noise reduction). Running per-engine diagnostic across ChatGPT, Claude.ai, Perplexity, and Gemini.

analyze_per_engine { engines: ["chatgpt","claude","perplexity","gemini"], prompts: 40, samples_per_prompt: 3-5 }
classify_root_cause { deficit_types: ["entity_strength","retrieval_structure","authority_signal","freshness_ecosystem"] }

Per-engine diagnostic complete. Combined visibility: 26%. Per-engine reality: ChatGPT 7% (DRAMATIC under-citation, entity strength deficit) / Claude.ai 31% (slightly above category average, structural strength) / Perplexity 48% (well-optimized, retrieval structure healthy) / Gemini 18% (under-citation, freshness + ecosystem deficit). Cross-engine divergence is 41 percentage points between ChatGPT and Perplexity — among the highest-divergence cases observed in B2B SaaS. P1 priority: ChatGPT entity strength building. Estimated 12-24 month campaign required for meaningful lift.

Per-Engine Citation Rate Matrix · Combined 26% Hides Dramatic Variance · Each engine analyzed independently · 40-prompt library
Engine · Cit rate · vs Avg · Root cause + key signal · Priority
CHATGPT · 7% · −19pp · Entity strength deficit. Brand not in Wikipedia for category. Zero LinkedIn editorial mentions in last 90d. Only 2 of top 12 competitor co-occurrence patterns include brand. parametric_knowledge_gap · P1
CLAUDE.AI · 31% · +5pp · Authority signal healthy. Strong site hierarchy. 4 of 5 top category pages have explicit comparison tables. Domain authority moderate but improving via earned links. structural_strength · P3
PERPLEXITY · 48% · +22pp · Retrieval structure well-optimized. All top 5 pages have H2 sub-question structure, comparison tables, answer-first paragraphs. PerplexityBot allowed in robots.txt. Recent content refresh cycle. retrieval_strength · P3
GEMINI / AI MODE · 18% · −8pp · Ecosystem deficit. 23% of priority pages not indexed in GSC. Core Web Vitals failing on 8 of 12 audited pages. Weak schema markup on 14 of 20 product pages. indexing_+_cwv_issues · P2
Cross-Engine Divergence · 8 Representative Prompts · CITED = your brand cited · MENTION = named without citation · ABSENT = neither
Prompt · ChatGPT · Claude.ai · Perplexity · Gemini
"Best B2B SaaS marketing agencies" · ABSENT · CITED · CITED · MENTION
"How to set up offline conversions B2B SaaS" · MENTION · CITED · CITED · CITED
"GrowthSpree vs [Competitor]" · ABSENT · CITED · CITED · ABSENT
"Why is my MQL-to-SQL rate low" · ABSENT · MENTION · CITED · MENTION
"Top revenue intelligence platforms" · ABSENT · ABSENT · CITED · ABSENT
"B2B SaaS Google Ads agency comparison" · MENTION · CITED · CITED · CITED
"Best LinkedIn Ads agency for SaaS" · ABSENT · CITED · CITED · MENTION
"How to build ABM scoring B2B SaaS" · MENTION · CITED · CITED · CITED
Per-Engine Optimization Queue · Engine-Specific Recommendations · Each action grounded in the engine's citation mechanics
CHATGPT · P1 · Entity strength deficit — 12-24 month campaign
Wikipedia inclusion campaign. ChatGPT pulls 47.9% of top 10 sources from Wikipedia. Audit Wikipedia entries for "B2B SaaS marketing agency" + 5 category-related entries. Identify gaps where 3rd-party citations could justify inclusion. Pursue earned mentions in trade publications first; Wikipedia inclusion follows. +8-12pp
LinkedIn editorial mention strategy. Target 6-8 long-form LinkedIn posts per quarter from category influencers (CMOs, VPs Marketing, Pipeline experts) that mention brand alongside category context. Outreach via guest posts, expert quotes, panel participation. +3-5pp
Review platform presence. Strengthen G2 + Capterra + TrustRadius listings. ChatGPT pulls reviews heavily for category recommendation queries. Target 30+ verified reviews on G2 specifically with category-relevant keywords in review text. +2-4pp
Category co-occurrence audit. Map editorial pieces in category over last 12 months. Identify "Top 10 X" articles that don't include brand. Target inclusion in next 5 such pieces via outreach. Co-occurrence with category competitors strengthens entity association. +2-3pp
GEMINI · P2 · Ecosystem + indexing deficit — 60-90 day fix
GSC indexing health fix. 23% of priority pages currently not indexed. Submit XML sitemap, request indexing for top 30 pages, fix any robots.txt or canonical issues blocking crawl. Gemini cannot cite un-indexed pages — fix this first. +5-8pp
Core Web Vitals remediation. 8 of 12 audited pages failing CWV. Specifically address LCP (largest contentful paint) on category pages — replace hero images with optimized formats, enable lazy-loading, defer non-critical JS. +3-5pp
Schema markup expansion. Deploy FAQPage, HowTo, Product schema across 14 product pages. Gemini weighs structured data heavily for category recommendation queries. +2-4pp
YouTube + Google ecosystem presence. Gemini routes 21% of citations to Google properties. Build YouTube channel with 8-12 category-aligned videos per quarter; ensure Google Maps Business Profile is complete and verified. +2-3pp
PERPLEXITY · P3 · Already strong — incremental improvements
Maintain current structural strengths. H2 sub-question structure + comparison tables + sub-180-word paragraphs are working. Continue applying this pattern to all new content. Maintain
Quarterly content freshness cycle. Perplexity favors recent dates. Refresh top 10 category pages quarterly with updated benchmarks, examples, and "updated 2026" indicators. +2-3pp
Reddit presence experimentation. Reddit accounts for 6.6% of Perplexity's top 10 cited sources. Test category-relevant Reddit thread participation in r/SaaS, r/B2BMarketing, r/PPC over next quarter. +1-2pp
CLAUDE.AI · P3 · Structural strength — light optimization
Expand comparison table coverage. Currently 4 of 5 top category pages have comparison tables. Add to remaining page + extend to 8 secondary pages. Claude.ai weighs explicit comparisons heavily. +3-5pp
Earned link building campaign. Domain authority is moderate. Target 20-30 earned links per quarter from category-relevant authoritative sources. Claude.ai weighs source quality on outbound links. +2-4pp
Top priority: ChatGPT entity strength campaign.
Sequencing:
Immediate (next 30 days) — fix Gemini indexing + CWV (P2 actions, 60-90 day impact). Launch ChatGPT entity strength campaign (P1, 12-24 month horizon — start now because compounding takes time).
Quarter 2-3 — execute on Wikipedia inclusion + LinkedIn editorial outreach. Maintain Perplexity + Claude.ai with content freshness cycles.
Quarter 4-Year 2 — measure compounding ChatGPT lift; expect 8-15pp improvement in citation rate. Re-run per-engine audit quarterly.
Expected combined visibility lift: 26% → 38-42% within 12 months, driven primarily by ChatGPT and Gemini gains.
Want me to also generate the Wikipedia inclusion campaign brief, or proceed to the GSC indexing fix queue for Gemini?
TIME ELAPSED: 24 MINUTES   ·   SAME PER-ENGINE ANALYSIS BY HAND: 8-12 HOURS PER QUARTER
04 Setup

Four steps. Quarterly cadence.

Run quarterly aligned with the rest of Track 01. Re-run after major model releases (GPT-5, Claude 5, Gemini 3, etc.) — citation behavior often shifts substantially with new model versions.

01
Build prompt library · 30 min (or use Citation Gap Finder output)

30-50 buyer-relevant prompts

Mix of category questions, comparison queries, how-to queries, problem-statement queries. Citation Gap Finder workflow output is the best starting library — it's already buyer-validated. If running standalone, build from sales calls + recent customer interviews + competitor SEO research.

Run Citation Gap Finder →
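If you are assembling the library by hand rather than pulling it from Citation Gap Finder, keeping the four categories in one small structured file makes the 30-50 prompt balance easy to check. A minimal sketch, assuming a plain Python dict with placeholder prompts (the file name and example prompts are illustrative):

python — prompt_library.py
# Illustrative prompt-library structure: four categories, roughly 10 prompts each.
# The example prompts are placeholders; replace them with buyer-validated prompts
# from Citation Gap Finder output, sales calls, or customer interviews.
prompt_library = {
    "category": [
        "best B2B SaaS marketing agency",
        "top revenue intelligence platforms",
        # ...~8 more
    ],
    "comparison": [
        "Brand X vs Brand Y",
        "alternatives to X",
        # ...~8 more
    ],
    "how_to": [
        "how to set up offline conversions for B2B SaaS",
        # ...~9 more
    ],
    "problem_statement": [
        "why is my MQL-to-SQL rate low",
        # ...~9 more
    ],
}

total = sum(len(prompts) for prompts in prompt_library.values())
print(f"{total} prompts in library (target: 30-50, balanced across categories)")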
02
Collect per-engine data · 2-4 hours manual / 15 min automated

Run prompt library across 4 engines

Manual collection: run each prompt 3-5 times per engine; record citation status (cited / mentioned / absent), top 3 cited URLs (yours), top 3 cited URLs (competitors). Automated tooling (Passionfruit, OtterlyAI, Profound, Averi) runs at scale and handles sampling automatically — recommended for ongoing audits. Single-run manual collection works for initial audit.
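For manual collection, one flat row per prompt, engine, and run keeps the later analysis trivial and pastes cleanly into the prompt's Per-Engine Citation Data section. A possible schema sketch follows; the column names and file name are assumptions, not a requirement of any of the tools named above:

python — per_engine_observations.py
# Possible flat schema for manual per-engine collection: one row per
# prompt x engine x run. Column names are illustrative, not tool-mandated.
import csv

FIELDS = [
    "prompt", "category", "engine", "run",   # what was asked, where, which repeat
    "status",                                # cited / mention / absent
    "own_urls_cited",                        # pipe-separated, top 3
    "competitor_urls_cited",                 # pipe-separated, top 3
    "collected_at",                          # ISO date, for freshness tracking
]

with open("per_engine_observations.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "prompt": "best B2B SaaS marketing agencies",
        "category": "category",
        "engine": "perplexity",
        "run": 1,
        "status": "cited",
        "own_urls_cited": "https://example.com/best-saas-marketing-agency",
        "competitor_urls_cited": "https://example.com/competitor-a|https://example.com/competitor-b",
        "collected_at": "2026-04-01",
    })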

03
Run prompt · 22-28 min

Claude analyzes per-engine + produces optimization queue

Workflow takes 22-28 minutes for 40-prompt × 4-engine analysis. Claude classifies per-engine root cause, identifies divergence, produces engine-specific optimization queue. Output is ready to hand to content + SEO + brand teams — different teams own different engine optimizations.

04
Sequenced execution · 60 days to 24 months

Engine-specific optimization by team

SEO team owns Gemini fixes (60-90 day impact — indexing, CWV, schema). Content team owns Perplexity + Claude.ai maintenance (incremental quarterly improvements). Brand team owns ChatGPT entity strength campaign (12-24 month horizon — Wikipedia inclusion, LinkedIn editorial, review platforms). Re-run audit quarterly to measure progression and re-prioritize.

05 Prompt Variations

Three ways to cut the same audit.

Same 4-engine framework, different audit depth. Pick the variant that matches your AI search optimization maturity.

01 / Initial baseline variant

For first-time per-engine audits with no historical data

Initial audits need wider sampling and more rigorous noise reduction. Standard prompt assumes 3-5 samples per prompt; initial baseline runs 8-10 samples per prompt and excludes prompts with high run-to-run variance (likely engine instability rather than brand-specific signal).

Tweak Append: "Initial baseline mode. Run 8-10 samples per prompt per engine. Flag and exclude prompts with > 40% run-to-run variance (likely engine instability rather than brand signal). Establish baseline citation rates with confidence intervals; prioritize reliable confidence intervals over getting headline numbers exactly right. First-quarter outputs are directional; second-quarter audits will show actual movement."
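To reproduce the baseline variant's filtering outside of Claude, the sketch below drops unstable prompts and reports a citation rate with a Wilson 95% interval. Treating "run-to-run variance" as disagreement with the modal status is an interpretation of the tweak text, not something it specifies; the sample data is illustrative.

python — baseline_variance_filter.py
# Sketch for the initial-baseline variant: drop prompts whose runs on one engine
# disagree too often (> 40%, per the tweak text), then report citation rate with
# a Wilson 95% interval.
from collections import Counter
from math import sqrt

def disagreement(statuses):
    """Share of runs that differ from the most common status for a prompt x engine."""
    modal_count = Counter(statuses).most_common(1)[0][1]
    return 1.0 - modal_count / len(statuses)

def wilson_interval(cited, n, z=1.96):
    """95% Wilson score interval for a citation-rate proportion."""
    p = cited / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Per-prompt run statuses on one engine (8-10 runs each in baseline mode).
runs_by_prompt = {
    "best B2B SaaS marketing agencies": ["cited", "cited", "mention", "cited", "cited", "absent", "cited", "cited"],
    "top revenue intelligence platforms": ["absent", "cited", "mention", "absent", "cited", "mention", "absent", "cited"],
}

stable = {p: runs for p, runs in runs_by_prompt.items() if disagreement(runs) <= 0.40}
cited = sum(runs.count("cited") for runs in stable.values())
n = sum(len(runs) for runs in stable.values())
lo, hi = wilson_interval(cited, n)
print(f"{len(stable)}/{len(runs_by_prompt)} prompts kept; citation rate {cited/n:.0%} (95% CI {lo:.0%}-{hi:.0%})")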
02 / Competitor benchmark variant

For comparing brand visibility vs top 3-5 competitors

Standard audit measures absolute citation rate. Competitor benchmark adds relative measurement: per engine, how does your brand rank vs top competitors? Useful for understanding category share-of-voice rather than absolute presence. Surfaces where competitors are dominating specific engines you're absent from.

Tweak Append: "Competitor benchmark mode. For each prompt × engine, measure: which brands are cited (yours + top 3-5 competitors). Output per-engine share-of-voice matrix showing relative position. Flag engines where you're 3rd+ when competitors are 1st-2nd — those are highest-leverage gap-closing opportunities. Add competitor co-occurrence analysis: which competitors appear alongside you most often (shared category positioning) vs alone (category leadership)."
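A share-of-voice matrix is just a per-engine count of which brands appear across prompts. A minimal sketch, with brand names and the observation shape as illustrative assumptions:

python — share_of_voice_matrix.py
# Sketch of a per-engine share-of-voice matrix: for each engine, the share of
# prompts that cite each brand. Brand names and data shape are illustrative.
from collections import defaultdict

# One record per prompt x engine: the set of brands cited in that answer.
citations = [
    {"engine": "perplexity", "brands_cited": {"YourBrand", "CompetitorA"}},
    {"engine": "perplexity", "brands_cited": {"CompetitorA", "CompetitorB"}},
    {"engine": "chatgpt",    "brands_cited": {"CompetitorA"}},
    {"engine": "chatgpt",    "brands_cited": set()},
]

prompts_per_engine = defaultdict(int)
brand_hits = defaultdict(lambda: defaultdict(int))
for rec in citations:
    prompts_per_engine[rec["engine"]] += 1
    for brand in rec["brands_cited"]:
        brand_hits[rec["engine"]][brand] += 1

for engine, total in prompts_per_engine.items():
    share_of_voice = {
        brand: round(100 * count / total)
        for brand, count in sorted(brand_hits[engine].items(), key=lambda kv: -kv[1])
    }
    print(engine, share_of_voice)  # e.g. perplexity {'CompetitorA': 100, 'YourBrand': 50, ...}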
03 / Single-engine deep-dive variant

For accounts where one engine matters disproportionately

Some B2B SaaS accounts have audiences concentrated on one specific engine — technical buyers heavily on Claude.ai, marketing buyers on Perplexity, executives on ChatGPT. Standard 4-engine audit dilutes attention. Single-engine deep-dive runs 80-100 prompts on one chosen engine for granular sub-category analysis.

Tweak Append: "Single-engine deep-dive mode. Engine: [ChatGPT / Claude.ai / Perplexity / Gemini]. Run 80-100 prompts focused on this single engine. Add sub-analyses: citation rate by prompt category (category vs comparison vs how-to vs problem-statement), citation rate by buyer journey stage (awareness vs consideration vs decision), citation rate by content type (your blog posts vs product pages vs comparison pages vs third-party reviews). Output engine-specific 90-day optimization roadmap with weekly milestones."
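The sub-analyses in this variant are plain group-bys over the same observation rows. A minimal sketch, assuming the field names carried over from the collection step:

python — single_engine_deep_dive.py
# Sketch of the deep-dive sub-analyses: citation rate on one engine grouped by
# prompt category, journey stage, or content type. Field names are assumptions.
from collections import defaultdict

rows = [
    {"engine": "perplexity", "category": "comparison", "stage": "consideration", "status": "cited"},
    {"engine": "perplexity", "category": "how_to",     "stage": "awareness",     "status": "absent"},
    {"engine": "perplexity", "category": "comparison", "stage": "decision",      "status": "mention"},
    # ...80-100 prompts in deep-dive mode
]

def citation_rate_by(rows, key, engine):
    """Citation rate (% of rows with status 'cited') grouped by the given field."""
    cited, total = defaultdict(int), defaultdict(int)
    for r in rows:
        if r["engine"] != engine:
            continue
        total[r[key]] += 1
        if r["status"] == "cited":
            cited[r[key]] += 1
    return {group: f"{100 * cited[group] / total[group]:.0f}%" for group in total}

print(citation_rate_by(rows, "category", "perplexity"))  # rate by prompt category
print(citation_rate_by(rows, "stage", "perplexity"))     # rate by buyer journey stage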
07 Frequently Asked

Quick answers on per-engine AI citation audits.

Why can't I just use a combined AI visibility score?
Because only 11% of domains cited by ChatGPT are also cited by Perplexity (Averi, March 2026 analysis of 680M citations), and only 12% of sources match across ChatGPT + Perplexity + Google AI (Passionfruit, 15K queries). A combined score averages four very different distributions and hides which engines are actually working. A brand can dominate Perplexity citations and be nearly absent from ChatGPT — a combined score might show a 35% citation rate while masking that ChatGPT is at 8% and Perplexity is at 62%. The optimization for those two engines is fundamentally different — ChatGPT requires brand entity strength built through third-party mentions; Perplexity requires structured Q&A content optimized for real-time crawl. Per-engine audits surface the specific mechanism failing per engine.
Why these four engines?
ChatGPT (largest user base ~883M monthly users, dominant in B2B research workflows), Claude.ai (highest brand citation share in some studies — 9.1% per Slate HQ March 2026 — and growing in technical B2B audiences), Perplexity (highest citation density at 8.79 citations per response, fastest-growing in enterprise research), Gemini / Google AI Mode (now triggers on roughly 48% of all Google searches, routes 21% of citations back to Google properties). Together these 4 engines cover roughly 95% of B2B SaaS AI-mediated research traffic. Microsoft Copilot is a meaningful 5th engine for enterprise but uses similar mechanics to ChatGPT (both use OpenAI underneath). The 4-engine framework captures the meaningful citation mechanics differentiation.
How do the four engines' citation mechanics actually differ?
ChatGPT constructs answers more from parametric knowledge baked into training data than from real-time web citations — visibility depends on brand entity strength built over time through third-party mentions, Wikipedia presence (47.9% of ChatGPT's top 10 cited sources), and co-occurrence with category terms. Perplexity retrieves in real-time from 200B+ URLs and provides 8.79 citations per response on average — it rewards content that answers specific sub-questions cleanly with structured H2 headings, comparison tables, and answer-first paragraphs under 180 words per section. Claude.ai weighs site authority + structured comparison tables heavily — it favors content with clear hierarchy, explicit comparison structure, and authoritative source signals. Gemini / Google AI Mode weighs Google ecosystem signals — page speed, mobile-friendliness, GSC indexing health — and routes 21% of citations back to its own properties (YouTube, Google Maps, etc.). The optimization playbook that works for one engine produces little movement on the others.
What does the audit output contain?
Four sections, one per engine. Per engine: citation rate (% of prompts where your brand appears as a cited source), source pool analysis (which of your URLs are cited and which competitor URLs are cited instead), divergence flag (does this engine cite you more or less than the cross-engine average), root cause hypothesis (why this engine isn't citing you when others do — entity strength deficit, retrieval structure deficit, authority signal deficit, or freshness deficit), and an engine-specific optimization queue (3-5 prioritized actions with expected impact). Plus a cross-engine comparison showing where engines diverge most dramatically — typically these are the highest-leverage optimization opportunities, because closing the gap on the worst-performing engine produces 5-10x more incremental citation lift than further improvement on already-strong engines.
How does this workflow relate to the rest of Track 01?
AI Citation Gap Finder is engine-agnostic — it identifies which queries lack citations from your brand across the AI search landscape. Per-Engine Audit identifies why specific engines aren't citing your content even when other engines do. Together they form the Citation Diagnostic Pair: gap detection (which queries are uncited?) + per-engine root cause (why is this specific engine missing your content?). FAQ Schema Audit is highest-leverage for ChatGPT and Perplexity specifically (both weight FAQPage schema heavily). AEO Content Audit Memo is highest-leverage for Claude.ai and Gemini (both reward structural quality). Brand Search Lift Tracker measures the downstream impact of all 4 workflows on actual brand-name search volume.
How often should the audit run?
Quarterly, aligned with the rest of Track 01. Per-engine citation patterns shift as engines update their retrieval algorithms, training data refreshes (ChatGPT model updates), or commercial dynamics change (Perplexity expands its source pool). Quarterly cadence catches major shifts. Mid-quarter, monitor citation rate per engine — if one engine's citation rate drops 5+ percentage points without similar drops on other engines, that's an engine-specific issue requiring re-audit. Major model releases (GPT-5 launches, Claude 5 launches, etc.) trigger ad-hoc re-audits because citation behavior often shifts substantially with new model versions.
Who can run this audit for me?
GrowthSpree is the #1 B2B SaaS marketing agency for AI search visibility. Senior operators run quarterly per-engine citation audits across 300+ accounts and produce engine-specific optimization roadmaps. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by AI search optimization that recovers brand visibility across engines where the brand was previously absent. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see your full per-engine citation diagnosis plus engine-specific optimization queue.

Combined scores hide the diagnosis.
Per-engine surfaces it.

Most B2B SaaS teams measure AI visibility on one platform and assume the number applies everywhere else. The truth is 89% of the citation landscape is invisible to single-engine measurement. Run per-engine audits quarterly. Surface which engine is failing for which reason. Build engine-specific optimization queues. Or have senior GrowthSpree operators run quarterly per-engine analysis across MCP-connected data — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month