Why a blended score, not a single metric?
The GEO Score answers one question: "On a scale of 0–100, how ready is my brand to be cited by AI engines?" A single-metric score would be misleading in either direction:
- Citation-only would ignore your site. A brand can get lucky and be cited despite a terrible page — that is not a stable position.
- Site-readiness-only would ignore outcomes. A perfect page that no AI engine cites is not winning anything.
Blending forces honesty on both sides: you need AI engines to actually cite you (70%), and you need a page that deserves the citation (30%).
The formula
GEO Score = round( citationRate% × 0.7 + siteReadiness% × 0.3 ). If no site scan exists yet, citation rate counts for 100% until the first scan completes.
This formula is documented in ADR-005 and applied identically in every view (Overview, Dashboard, PDF export). You will never see a different number on a different page.
What moves citation rate
Citation rate is the share of engine × prompt combinations that cited your brand in the most recent scan. If Findabl queries 4 engines on 10 prompts (40 combinations) and you were cited in 12 of them, your citation rate is 30%.
- Up: earned media on trusted publishers, more tracked prompts you rank on, model updates that favor your content type.
- Down: competitors earning new placements, Reddit/forum threads that shift engine consensus, model updates that deprecate your sources.
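The 12-out-of-40 example above can be expressed as a simple ratio over cited/not-cited outcomes. A minimal sketch; the function name and the boolean-list representation of scan results are assumptions for illustration.

```python
def citation_rate(cited: list[bool]) -> float:
    """Percentage of engine x prompt combinations that cited the brand."""
    if not cited:
        return 0.0
    return 100 * sum(cited) / len(cited)

# 4 engines x 10 prompts = 40 combinations, cited in 12 of them
results = [True] * 12 + [False] * 28
rate = citation_rate(results)  # 30.0
```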
What moves site readiness
Site readiness is a 0–100 normalization of the geoBreakdown signals Findabl extracts during the site scan. Each signal contributes a fixed number of points; the raw total maxes out around 130, which we normalize down to a 0–100 scale. Signals include:
- FAQ schema / structured data present
- Question-style H2 / H3 headings
- Word count sufficient for the query type
- Load time under a threshold
- Author bios, bylines, or other E-E-A-T markers
- Internal linking depth
- Freshness indicators (last-updated dates, publication dates)
The full breakdown is in the "What we check" disclosure on the Overview tab. Every signal ties back to a documented, testable extraction rule.
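The normalization step can be sketched as follows. The per-signal point values here are invented placeholders chosen only so the raw maximum lands near 130; the real weights come from Findabl's documented extraction rules in the "What we check" disclosure.

```python
# Hypothetical point values per detected signal (NOT Findabl's real weights);
# they sum to 130 to mirror the ~130 raw maximum described above.
SIGNAL_POINTS = {
    "faq_schema": 25,
    "question_headings": 20,
    "word_count": 20,
    "load_time": 15,
    "eeat_markers": 20,
    "internal_linking": 15,
    "freshness": 15,
}
MAX_RAW = sum(SIGNAL_POINTS.values())  # 130

def site_readiness(detected: set[str]) -> float:
    """Normalize the raw signal points down to a 0-100 scale."""
    raw = sum(pts for sig, pts in SIGNAL_POINTS.items() if sig in detected)
    return round(100 * raw / MAX_RAW, 1)
```

A page hitting every signal scores 100.0; a page with none of them scores 0.0.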
Common misreads
GEO Score is a readiness number. Your competitor could have a lower score and still outrank you on citations because they earned a placement on a publisher AI engines trust. Use GEO Score for trend tracking; use the Competitors tab for head-to-head.
Citation rate is sample-based: a single prompt flipping from "cited" to "not cited" shifts your score. In the 40-combination example above, one flip moves citation rate by 2.5 points (about 1.75 GEO points after the 0.7 weight). Look at the trend sparkline (last 10 scans) before reacting to a one-scan move.
The honest caveats
We publish the formula publicly so you can sanity-check it. That also means you can see its limits: the score does not account for citation quality (a citation from Wikipedia and one from a low-quality blog count the same in the rate), the sentiment of the citation, or whether the citation drives actual traffic. Those are real omissions. We would rather be transparent about them than inflate the score with guesses.