How Citedon predicts which pages AI engines cite.
LLM search is replacing Google for a fast-growing slice of buyer queries. Existing SEO tools tell you what AI engines cited yesterday. Citedon tells you what they'd cite if you shipped a specific change, and ranks those changes by how much each one moves the sim panel.
The four-stage pipeline
Ground truth
Query ChatGPT, Perplexity, Claude, and Gemini directly with the inferred buyer query. Capture the URLs each engine actually cited.
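In code, the ground-truth step is a simple fan-out and capture. A minimal sketch, assuming a hypothetical `ask_engine` helper that wraps whatever API or browser session each engine actually requires:

```python
# Sketch of the ground-truth capture step. `ask_engine` is a hypothetical
# stand-in for each engine's real API or browser session; the output shape
# is the point, not the transport.
from typing import Callable

ENGINES = ["chatgpt", "perplexity", "claude", "gemini"]

def capture_ground_truth(
    query: str, ask_engine: Callable[[str, str], list[str]]
) -> dict[str, list[str]]:
    """Return {engine: [cited URLs, in the order the engine cited them]}."""
    return {engine: ask_engine(engine, query) for engine in ENGINES}
```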
Sim panel
Spawn 20 LLM agents, each primed with a distinct persona: skeptical researcher, conversational user, journalist, and so on. Each agent independently ranks which candidate URLs it would cite for the query. The aggregate distribution approximates the answer space the real engines explore.
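A minimal sketch of the panel aggregation, assuming every agent returns a full ranking of the same candidate list. Mean rank is one reasonable way to combine the panel, not necessarily the statistic Citedon uses, and the persona-primed LLM call itself is out of scope here:

```python
# Combine 20 independent agent rankings into one panel ranking.
# Assumes every agent ranked every candidate URL.
from collections import defaultdict

PERSONAS = [
    "skeptical researcher",
    "conversational user",
    "journalist",
    # ... 17 more distinct personas
]

def aggregate_panel(
    candidates: list[str], agent_rankings: list[list[str]]
) -> list[str]:
    """Order candidates by mean rank across agents (lower = cited sooner)."""
    rank_sums: dict[str, int] = defaultdict(int)
    for ranking in agent_rankings:
        for position, url in enumerate(ranking):
            rank_sums[url] += position
    return sorted(candidates, key=lambda url: rank_sums[url] / len(agent_rankings))
```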
Correlation lock
Spearman rank correlation (ρ) between the sim-panel ranking and each real engine's ranking. Reported per engine and as a consensus number. One figure that tells you how trustworthy this scan's prediction is.
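Concretely, with SciPy. Restricting the comparison to URLs both rankings contain, and averaging per-engine ρ into the consensus, are assumptions made for this sketch:

```python
# Spearman rho between the sim panel's ranking and one real engine's ranking.
from scipy.stats import spearmanr

def correlation_lock(sim_ranking: list[str], engine_ranking: list[str]) -> float:
    shared = [url for url in sim_ranking if url in engine_ranking]
    sim_ranks = [sim_ranking.index(url) for url in shared]
    engine_ranks = [engine_ranking.index(url) for url in shared]
    rho, _pvalue = spearmanr(sim_ranks, engine_ranks)
    return rho

def consensus(sim_ranking: list[str], per_engine: dict[str, list[str]]) -> float:
    """One trust number across all four engines: a plain mean of per-engine rho."""
    rhos = [correlation_lock(sim_ranking, r) for r in per_engine.values()]
    return sum(rhos) / len(rhos)
```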
Pattern extraction
Claude Sonnet reads the top-cited winner pages alongside the customer's pages. Returns ranked content edits, winner patterns, and structural gaps, referencing specific elements observed, not generic SEO advice.
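A hedged sketch of that call with the Anthropic SDK. The prompt wording, output format, and model id are illustrative, not Citedon's actual prompt:

```python
# Illustrative pattern-extraction call; only the Messages API shape is real.
import anthropic

PROMPT = """Compare the top-cited winner pages with the customer's page
for this buyer query: {query}

WINNER PAGES:
{winners}

CUSTOMER PAGE:
{customer}

Return ranked content edits, patterns the winners share, and structural gaps
in the customer page. Reference specific elements you observed. No generic
SEO advice."""

def extract_patterns(query: str, winners: str, customer: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumption: any Sonnet-class model
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": PROMPT.format(query=query, winners=winners, customer=customer),
        }],
    )
    return response.content[0].text
```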
Why twenty agents
At ~$0.03 per agent call, twenty draws cost about $0.60 per scan, leaving us margin against the $19 unlock. Forty agents would halve that margin without meaningfully improving the signal. Ten would leave the panel feeling thin.
For a binary cited / not-cited outcome, twenty draws give a ±22% margin of error at 95% confidence. Below ten, the noise eats the signal. Above thirty, you're paying for precision the buyer can't act on. Twenty is the sweet spot.
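That figure is the standard normal-approximation margin of error for a proportion, taken at the worst case p = 0.5:

```python
# Worst-case margin of error for a binary outcome across n agent draws.
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * sqrt(p * (1 - p) / n)

for n in (10, 20, 30, 40):
    print(n, f"±{margin_of_error(n):.1%}")
# 10 ±31.0%, 20 ±21.9%, 30 ±17.9%, 40 ±15.5%; returns diminish fast past twenty
```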
Twenty agents reads as substantial in the report. Concrete enough to trust, small enough to fit on one screen. Five reads as a toy. A hundred reads as LLM theater nobody will dig into.
Validation
Validated across 47 paid scans during private beta. The methodology spec, including the exact queries, agent personas, and engine sampling protocol, is published openly. The number we don't ship is a "confidence score" computed from features unrelated to citation behavior. We report ρ against actual engine output, full stop.
Read the methodology spec →
Where existing tools differ
| | Profound · Athena · Daydream | Citedon |
|---|---|---|
| What it measures | Citations today | Citations after you ship a change |
| Method | Engine polling | Engine polling + 20-agent sim + LLM pattern extraction |
| Output | Score | Edits, winner patterns, and gap analysis, ranked by predicted lift |
| Trust signal | "Our score" | Spearman ρ against real engine ranks |
Citedon predicts. Comparable tools report. Both are useful. Only one tells you what to do next.
What's in a paid report
Verdict number
A single 0-4 figure with the verdict line. Top of the report.
12 ranked edits
Each edit with a predicted-lift bar. Sorted descending. Specific, not generic.
Winners + gaps
Eight patterns winning pages share. Eight gaps your URL has. Side by side.
Citation audit
Your URLs, sim-citation %, and the real engines each one already appears in.
Sim-panel top picks
The ten URLs the panel ranked highest, regardless of who owns them.
Per-engine raw citations
Each engine's full cited URL list, expandable. The receipts.
Pricing, briefly
Try the free audit first.
No signup. No card. 90 seconds.