
How Citation Gap Analysis works

Gap analysis fetches the competitor pages AI engines actually cite, extracts the structural signals that made them citable, and tells you exactly what you need to add.

Last updated: April 15, 2026 · 5 min

TL;DR

Gap analysis answers one question: "What do my cited competitors have on their pages that I do not?" Findabl fetches up to 5 competitor pages AI engines have cited for your tracked prompts, extracts structural signals (schema, heading patterns, source type, age), and returns prioritized actions tied to those observations. No LLM speculation — only observed data.

What "gap" means in this context

A citation gap is a structural or content feature present on a competitor page that AI engines cited — but missing on your page. If every cited competitor page has FAQ schema and yours does not, that is a gap. If every cited competitor has a "Last updated" date within the last 6 months and yours is 3 years old, that is a gap.

Gap analysis is not an LLM guessing at recommendations. It is a direct structural comparison of actual pages, with every action tied to an observed data point and a verification test per ADR-003.

What we extract

For each competitor page, Findabl fetches the HTML and extracts a fixed set of signals:

| Signal category | What we measure |
| --- | --- |
| Source type | Earned media, review site, directory, encyclopedia, forum, brand-owned, social, unknown |
| Schema markup | FAQ, HowTo, Article, Product, Organization, Review schemas present |
| Heading structure | H2/H3 phrased as questions, keyword match against tracked prompts |
| Content age | Publication date, last-updated date, relative freshness |
| E-E-A-T markers | Author bio, bylines, links to expert profiles, external citations |
| Content depth | Word count, outbound reference count, internal linking density |
We run the same extraction against your URL and then compare. The gaps are the signals your competitors have that you lack.
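Conceptually, the comparison step is a set difference over extracted signals. A minimal sketch, assuming boolean flags per signal (the `SignalSet` type, its field names, and `findGaps` are illustrative, not Findabl's actual schema):

```typescript
// Illustrative sketch: a page's extracted signals as boolean flags.
// Field names are hypothetical, not Findabl's real data model.
type SignalSet = {
  faqSchema: boolean;
  howToSchema: boolean;
  questionHeadings: boolean;
  freshWithinSixMonths: boolean;
  authorBio: boolean;
};

// A gap is any signal a cited competitor page has that your page lacks.
function findGaps(competitor: SignalSet, yours: SignalSet): string[] {
  return (Object.keys(competitor) as (keyof SignalSet)[]).filter(
    (k) => competitor[k] && !yours[k],
  );
}

const competitor: SignalSet = {
  faqSchema: true,
  howToSchema: false,
  questionHeadings: true,
  freshWithinSixMonths: true,
  authorBio: true,
};
const yours: SignalSet = {
  faqSchema: false,
  howToSchema: false,
  questionHeadings: true,
  freshWithinSixMonths: false,
  authorBio: true,
};

console.log(findGaps(competitor, yours)); // ["faqSchema", "freshWithinSixMonths"]
```

In the real product the same comparison runs against up to 5 competitor pages, so each gap also carries a count of how many cited competitors share the signal.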

How actions are prioritized

Not every gap is worth fixing. The output prioritizes actions by likely impact:

  • Critical: gaps present on 4+ of 5 cited competitors. If almost every cited page has this feature and yours does not, the feature is likely a prerequisite for citation in this category.
  • High: gaps present on 3 of 5 cited competitors, or single-competitor gaps of a known high-signal type (e.g. FAQ schema).
  • Medium: single-competitor gaps of moderate signal strength.
  • Low: stylistic differences that rarely move the needle.
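The tiers above amount to a threshold on how many of the 5 cited competitors share the gap, with an override for known high-signal types. A sketch under those rules (the `HIGH_SIGNAL` set and function name are assumptions, and the "low" tier is simplified here to the zero-competitor case):

```typescript
type Priority = "critical" | "high" | "medium" | "low";

// Hypothetical set of signal types known to correlate strongly with
// citation, which promote even a single-competitor gap to "high".
const HIGH_SIGNAL = new Set(["faqSchema", "howToSchema"]);

// Map a gap to a tier from how many of the 5 cited competitors
// have the feature you lack, per the rules above.
function prioritize(signal: string, competitorsWithSignal: number): Priority {
  if (competitorsWithSignal >= 4) return "critical";
  if (competitorsWithSignal >= 3 || HIGH_SIGNAL.has(signal)) return "high";
  if (competitorsWithSignal >= 1) return "medium";
  return "low";
}

console.log(prioritize("faqSchema", 1)); // "high" — known high-signal type
console.log(prioritize("authorBio", 4)); // "critical" — near-universal feature
console.log(prioritize("wordCount", 1)); // "medium" — single-competitor gap
```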

Each action carries a dataPoint (the observed fact, e.g. "4 of 5 cited competitors have FAQ schema; your page does not") and a verificationTest (how to prove the fix worked, e.g. "Add FAQPage schema; re-run Project in 7 days; confirm readiness score increases by >3 points").
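An action record might therefore look like the following. Only the `dataPoint` and `verificationTest` field names come from the description above; the surrounding type and the `priority`/`action` fields are illustrative:

```typescript
// Sketch of a single gap-analysis action. Field names other than
// dataPoint and verificationTest are hypothetical.
type GapAction = {
  priority: "critical" | "high" | "medium" | "low";
  action: string; // what to change on your page
  dataPoint: string; // the observed fact behind the recommendation
  verificationTest: string; // how to confirm the fix worked
};

const example: GapAction = {
  priority: "critical",
  action: "Add FAQPage schema to the page",
  dataPoint: "4 of 5 cited competitors have FAQ schema; your page does not",
  verificationTest:
    "Add FAQPage schema; re-run Project in 7 days; confirm readiness score increases by >3 points",
};

console.log(example.dataPoint);
```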

When gap analysis runs

Gap analysis now runs as part of every Run Project — awaited, not fire-and-forget. When the Run Project spinner stops, gap analysis is complete and the result is cached locally. On the Overview tab, the Citation Gap Analysis panel renders instantly from the cached result.
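The awaited-then-cached flow can be sketched as below. All names (`runProject`, `analyzeGaps`, `gapCache`) are illustrative, and the in-memory `Map` stands in for whatever local cache the app actually uses:

```typescript
// Hypothetical local cache keyed by project id.
const gapCache = new Map<string, string[]>();

// Stand-in for the real structural comparison.
async function analyzeGaps(projectId: string): Promise<string[]> {
  return ["faqSchema"];
}

async function runProject(projectId: string): Promise<void> {
  // ... citation scan, other checks ...
  // Gap analysis is awaited (not fire-and-forget), so by the time this
  // promise resolves the result is already in the cache.
  const gaps = await analyzeGaps(projectId);
  gapCache.set(projectId, gaps);
}

// The Overview panel reads synchronously from the cache — no spinner.
function renderOverviewPanel(projectId: string): string[] | undefined {
  return gapCache.get(projectId);
}
```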

What if no competitors were found?

If the citation scan did not surface any co-mentioned competitors AND you have not declared any in project settings, gap analysis has nothing to compare against and is skipped. The panel shows a placeholder prompting you to declare competitors in settings.
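The skip condition reduces to a single boolean check (a sketch; the function name and argument shapes are assumptions):

```typescript
// Gap analysis runs only when there is something to compare against:
// competitors surfaced by the citation scan OR declared in settings.
function shouldRunGapAnalysis(
  scannedCompetitors: string[],
  declaredCompetitors: string[],
): boolean {
  return scannedCompetitors.length > 0 || declaredCompetitors.length > 0;
}

console.log(shouldRunGapAnalysis([], [])); // false — panel shows the placeholder
console.log(shouldRunGapAnalysis([], ["competitor.example"])); // true
```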

Acting on the results

The recommended workflow:

  1. Focus on Critical actions first — those are the closest to required features.
  2. Implement fixes in priority order. Most are small (add schema, rephrase a heading, update a date) and ship in a day.
  3. Re-run Project. Verify the action's verificationTest passes (readiness score moves, signal flips from missing to present).
  4. Wait 2-4 weeks. AI engines need time to re-crawl and re-evaluate. Citation rate changes lag structural changes.
  5. Compare the new gap analysis against the previous one. Closed gaps will drop off; any new gaps (competitors shipped something recent) will appear.

Related guides