What "gap" means in this context
A citation gap is a structural or content feature present on a competitor page that AI engines cited — but missing on your page. If every cited competitor page has FAQ schema and yours does not, that is a gap. If every cited competitor has a "Last updated" date within the last 6 months and yours is 3 years old, that is a gap.
Gap analysis is not an LLM guessing at recommendations. It is a direct structural comparison of actual pages, with every action tied to an observed data point and a verification test per ADR-003.
What we extract
For each competitor page, Findabl fetches the HTML and extracts a fixed set of signals:
| Signal category | What we measure |
|---|---|
| Source type | Earned media, review site, directory, encyclopedia, forum, brand-owned, social, unknown |
| Schema markup | FAQ, HowTo, Article, Product, Organization, Review schemas present |
| Heading structure | H2/H3 phrased as questions, keyword match against tracked prompts |
| Content age | Publication date, last-updated date, relative freshness |
| E-E-A-T markers | Author bio, bylines, links to expert profiles, external citations |
| Content depth | Word count, outbound reference count, internal linking density |
We run the same extraction against your URL and then compare. The gaps are the signals your competitors have that you lack.
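The extract-and-compare step can be sketched as follows. The `SignalProfile` shape and `findGaps` helper are hypothetical illustrations, not Findabl's actual internals, and cover only two of the signal categories from the table above:

```typescript
// Hypothetical profile shape; the real extraction covers all signal
// categories in the table (source type, E-E-A-T markers, depth, etc.).
type SignalProfile = {
  url: string;
  schemas: string[];          // e.g. ["FAQPage", "Article"]
  questionHeadings: number;   // count of H2/H3s phrased as questions
  lastUpdated: string | null; // ISO date, if detectable
};

// A gap is any signal a competitor page has that your page lacks.
function findGaps(yours: SignalProfile, competitor: SignalProfile): string[] {
  const gaps: string[] = [];
  for (const schema of competitor.schemas) {
    if (!yours.schemas.includes(schema)) gaps.push(`missing ${schema} schema`);
  }
  if (competitor.questionHeadings > 0 && yours.questionHeadings === 0) {
    gaps.push("no question-phrased headings");
  }
  return gaps;
}
```

Running `findGaps` once per cited competitor yields the raw gap list that prioritization then ranks.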
How actions are prioritized
Not every gap is worth fixing. The output prioritizes actions by likely impact:
- Critical: gaps present on 4+ of 5 cited competitors. If almost every cited page has this feature and yours does not, the feature is likely a prerequisite for citation in this category.
- High: gaps present on 3 of 5 cited competitors, or single-competitor gaps of a known high-signal type (e.g. FAQ schema).
- Medium: single-competitor gaps of moderate signal strength.
- Low: stylistic differences that rarely move the needle.
Each action carries a dataPoint (the observed fact, e.g. "4 of 5 cited competitors have FAQ schema; your page does not") and a verificationTest (how to prove the fix worked, e.g. "Add FAQPage schema; re-run Project in 7 days; confirm readiness score increases by >3 points").
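The tiering above can be sketched as a small scoring function. This is an illustrative assumption about how the rules compose, assuming five cited competitors, not Findabl's actual scoring code:

```typescript
type Priority = "critical" | "high" | "medium" | "low";

// Hypothetical mapping from "how many of the 5 cited competitors share
// this gap" (plus whether the signal type is known-high-signal, like
// FAQ schema) to the priority tiers described above.
function prioritize(competitorsWithSignal: number, highSignalType: boolean): Priority {
  if (competitorsWithSignal >= 4) return "critical"; // near-universal: likely a prerequisite
  if (competitorsWithSignal >= 3) return "high";
  if (competitorsWithSignal >= 1 && highSignalType) return "high";
  if (competitorsWithSignal >= 1) return "medium";
  return "low"; // stylistic differences only
}
```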
When gap analysis runs
Gap analysis now runs as part of every Run Project — awaited, not fire-and-forget. When the Run Project spinner stops, gap analysis is complete and the result is cached locally. On the Overview tab, the Citation Gap Analysis panel renders instantly from the cached result.
If the citation scan did not surface any co-mentioned competitors AND you have not declared any in project settings, gap analysis has nothing to compare against and is skipped. The panel shows a placeholder prompting you to declare competitors in settings.
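The skip condition reduces to a simple guard. A minimal sketch, with hypothetical parameter names:

```typescript
// Gap analysis needs at least one competitor to compare against: either
// one surfaced by the citation scan or one declared in project settings.
function shouldRunGapAnalysis(
  scannedCompetitors: string[],
  declaredCompetitors: string[]
): boolean {
  return scannedCompetitors.length > 0 || declaredCompetitors.length > 0;
}
```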
Acting on the results
The recommended workflow:
- Focus on Critical actions first — those are the closest to required features.
- Implement fixes in priority order. Most are small (add schema, rephrase a heading, update a date) and ship in a day.
- Re-run Project. Verify the action's verificationTest passes (readiness score moves, signal flips from missing to present).
- Wait 2-4 weeks. AI engines need time to re-crawl and re-evaluate. Citation rate changes lag structural changes.
- Compare the new gap analysis against the previous one. Closed gaps will drop off; any new gaps (competitors shipped something recent) will appear.
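The last step, comparing runs, is a set difference over gap identifiers. A sketch, assuming gaps are keyed by a stable description string (a hypothetical simplification):

```typescript
// Diff two gap-analysis runs: gaps you closed since the previous run,
// and new gaps that appeared (e.g. a competitor shipped a new feature).
function diffGapRuns(previous: string[], current: string[]) {
  return {
    closed: previous.filter((gap) => !current.includes(gap)),
    opened: current.filter((gap) => !previous.includes(gap)),
  };
}
```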