
How Findabl measures AI citations

The methodology behind every citation scan: unbranded query generation, multi-engine monitoring, and how we separate what AI cites from what your brand says about itself.

Last updated: April 15, 2026 · 8 min

TL;DR

Findabl auto-generates unbranded buyer prompts for your category, queries ChatGPT, Perplexity, Claude, and Gemini on a recurring cadence, and records who got cited in each response. Nothing fancy — just a disciplined measurement loop with every source and query visible to you.

The citation measurement loop

Every Findabl project runs the same pipeline, whether you triggered it manually or the system ran it on its daily schedule:

  1. Pre-flight. If the project is missing tracked queries or competitors, Findabl calls generateCitationQueries on your domain to auto-generate a diverse set of unbranded buyer prompts and suggested competitors.
  2. Site scan. Fetch the primary URL, parse structure, extract AI-readiness signals (word count, FAQ schema, headings, load time, structured data, E-E-A-T markers).
  3. Citation check. Query all four AI engines in parallel with each tracked prompt. Parse responses for brand mentions, competitor mentions, co-mentions, and source URLs.
  4. Gap analysis. For each cited competitor, fetch their page, extract their citation-worthy signals, and identify what you lack that they have.
  5. Persist. Store everything in Firestore under your project: one new document in citationHistory, updated domain counts in publisherDomainIndex, a full report in reports, and a gap analysis in citationGapAnalyses.
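The five steps above can be modeled as one async loop. This is a minimal sketch, not Findabl's actual code: every name besides generateCitationQueries (which the pipeline description mentions) is hypothetical, and the gap-analysis step is elided.

```typescript
// Illustrative sketch of the scan pipeline. All identifiers except
// generateCitationQueries are hypothetical.
type Engine = "chatgpt" | "perplexity" | "claude" | "gemini";

interface ScanResult {
  queries: string[];
  readinessSignals: Record<string, unknown>;
  citations: { engine: Engine; prompt: string; mentioned: boolean; sources: string[] }[];
}

async function runScan(
  domain: string,
  trackedQueries: string[],
  deps: {
    generateCitationQueries: (domain: string) => Promise<string[]>;
    scanSite: (url: string) => Promise<Record<string, unknown>>;
    checkCitation: (engine: Engine, prompt: string) => Promise<{ mentioned: boolean; sources: string[] }>;
    persist: (result: ScanResult) => Promise<void>;
  }
): Promise<ScanResult> {
  // 1. Pre-flight: auto-generate unbranded queries if none are tracked yet.
  const queries =
    trackedQueries.length > 0 ? trackedQueries : await deps.generateCitationQueries(domain);

  // 2. Site scan: extract AI-readiness signals from the primary URL.
  const readinessSignals = await deps.scanSite(`https://${domain}`);

  // 3. Citation check: all four engines in parallel, per tracked prompt.
  const engines: Engine[] = ["chatgpt", "perplexity", "claude", "gemini"];
  const citations = await Promise.all(
    queries.flatMap((prompt) =>
      engines.map(async (engine) => ({
        engine,
        prompt,
        ...(await deps.checkCitation(engine, prompt)),
      }))
    )
  );

  // 4–5. Gap analysis would run here; then everything is persisted.
  const result: ScanResult = { queries, readinessSignals, citations };
  await deps.persist(result);
  return result;
}
```

With one tracked prompt, the loop produces one citation record per engine, i.e. four records per prompt.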

Why we use unbranded queries

Every tracked prompt is unbranded by design — we never inject your brand name into the question. "Best CRM for early-stage SaaS" is a valid prompt; "Best CRM features Findabl offers" is not.

The reason is measurement integrity. If the prompt already names you, of course the AI is going to talk about you — but that result tells you nothing useful about whether real buyers would encounter you. Unbranded prompts simulate what actually happens when a prospect asks an AI engine a real buying question.

This is enforced in code

Unbranded queries are a hard architectural rule (ADR-005). The query generator actively filters brand names out of generated prompts, and we reject user-submitted prompts that contain the tracked brand. There is no toggle to turn this off.
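As a sketch of what that rule looks like in practice, a validator only needs to reject prompts containing the tracked brand name or domain. The function below is illustrative, not the actual ADR-005 implementation; real brand-variant matching would be broader than substring checks.

```typescript
// Hypothetical sketch of the unbranded-prompt rule: reject any prompt that
// contains the tracked brand name, domain, or bare domain label.
function isUnbranded(prompt: string, brand: string, domain: string): boolean {
  const haystack = prompt.toLowerCase();
  const needles = [brand, domain, domain.split(".")[0]].map((n) => n.toLowerCase());
  return !needles.some((n) => n.length > 0 && haystack.includes(n));
}
```

Under this rule, "Best CRM for early-stage SaaS" passes while "Best CRM features Findabl offers" is rejected for a project tracking Findabl.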

How citations are detected

For each AI engine response, Findabl runs three detection passes:

  • Brand detection. Does the response mention your brand name, your domain, or a recognized variant? The answer is a boolean mentioned: true/false for that engine × prompt.
  • Competitor extraction. Parse the response for named brands, URLs, and known competitor domains. This builds the co-mention list that populates the Competitors tab.
  • Source URL capture. For engines that list sources (Perplexity, Gemini, ChatGPT with browsing), capture the full URL list. This feeds the Publisher Domain Index so you can see which sites AI engines cite most in your category.

We store the complete raw response text alongside the parsed fields. You can always inspect exactly what ChatGPT said on a specific prompt — no summarization, no interpretation.
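The three detection passes, plus the raw-text storage, can be sketched as a single parse step. Field names and the exact matching rules here are assumptions for illustration; in particular, real source capture comes from engine metadata where available, not only from URLs embedded in the text.

```typescript
// Illustrative parser for one engine response: brand detection, competitor
// extraction, and source URL capture, with the raw text kept verbatim.
interface ParsedResponse {
  mentioned: boolean;         // brand detection pass
  competitorsFound: string[]; // competitor extraction pass
  sourceUrls: string[];       // source URL capture pass
  rawText: string;            // stored alongside the parsed fields
}

function parseResponse(
  rawText: string,
  brandVariants: string[],
  knownCompetitors: string[]
): ParsedResponse {
  const lower = rawText.toLowerCase();
  const mentioned = brandVariants.some((v) => lower.includes(v.toLowerCase()));
  const competitorsFound = knownCompetitors.filter((c) => lower.includes(c.toLowerCase()));
  // Capture bare URLs; stop at whitespace and closing brackets.
  const sourceUrls = rawText.match(/https?:\/\/[^\s)\]]+/g) ?? [];
  return { mentioned, competitorsFound, sourceUrls, rawText };
}
```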

How often we scan

Findabl runs a daily scan on every active project. The scheduler is dailyCitationMonitoring, a Cloud Function that fires at 09:00 UTC and enqueues each project for a fresh citation check.

On paid plans, the scan cadence can be increased (up to five runs per day on higher tiers). Manual runs via Run Project are always available and are not rate-limited under normal use.
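The daily fan-out can be modeled as a filter over active projects. dailyCitationMonitoring is the real function name from above, but the job-queue logic and the per-plan caps below are assumptions; the docs only state a maximum of five runs per day on higher tiers.

```typescript
// Hypothetical model of what the daily scheduler enqueues at 09:00 UTC.
interface Project {
  id: string;
  active: boolean;
  runsToday: number;
  plan: "free" | "pro" | "scale";
}

// Assumed per-plan daily caps (only the top-tier cap of 5 is documented).
const DAILY_CAP: Record<Project["plan"], number> = { free: 1, pro: 3, scale: 5 };

function enqueueDailyScans(projects: Project[]): string[] {
  return projects
    .filter((p) => p.active && p.runsToday < DAILY_CAP[p.plan])
    .map((p) => p.id); // project IDs queued for a fresh citation check
}
```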

What we report vs. what we do not

We report:

  • Exact AI engine response text
  • Source URLs each engine cited
  • Per-engine citation rate over time
  • Co-mentioned competitors
  • Publisher Domain Index (earned media)
  • Site readiness signals

We do not report:

  • Paid search ad positions
  • Organic Google rankings
  • Backlink profiles
  • Technical SEO crawl errors
  • Keyword search volume
  • Domain authority scores
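The per-engine citation rate we report reduces to a simple share: of the tracked prompts checked on a given engine, how many produced a brand mention. This is a minimal sketch with assumed field names, not Findabl's exact formula.

```typescript
// Illustrative per-engine citation rate from stored check results.
interface CitationRecord {
  engine: string;
  prompt: string;
  mentioned: boolean;
}

function citationRate(records: CitationRecord[], engine: string): number {
  const forEngine = records.filter((r) => r.engine === engine);
  if (forEngine.length === 0) return 0; // no checks yet for this engine
  return forEngine.filter((r) => r.mentioned).length / forEngine.length;
}
```

Computing this per scan, per engine, is what yields the citation-rate trend line over time.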

Findabl is an AI Citation Intelligence platform. It does not try to be a classic SEO suite. If you need keyword research or backlink analysis, pair Findabl with a dedicated SEO tool — they measure different things.

Where your data lives

Everything Findabl measures is stored in Firestore under your workspace. You own it. Export is built into every project (the Export PDF button on the Overview tab produces a complete, shareable report).

We do not train AI models on your data and we do not share your citation history with third parties. The scan methodology is public, the scoring formula is public, and the raw AI responses are visible to you for every run.
