The Method

How Citedon predicts which pages AI engines cite.

LLM search is replacing Google for a fast-growing slice of buyer queries. Existing SEO tools tell you what AI engines cited yesterday. Citedon tells you what they'd cite if you shipped a specific change, and ranks the changes by how much each one moves the panel.

4 engines · ChatGPT · Perplexity · Claude · Gemini
20 agents · per scan · persona-distinct sim panel
0.71 avg ρ · Spearman vs. real engine output
How a scan flows
Input: https://nerdwallet.com/best/credit-cards → "best credit cards 2026"
query · inferred from page signals

01 Live engines · parallel queries · 4 × SSE
ChatGPT · Perplexity · Claude · Gemini
→ cited URL sets, ranks 1..N

02 Sim panel · 20 persona-distinct agents · 20 × CLAUDE HAIKU
→ 20 ranked distributions vs. real engine ranks

03 Correlation lock · Spearman ρ · ρ ≥ 0.50 BASELINE · avg ρ 0.71
→ trusted sim distribution · proceed if ρ ≥ 0.50

04 Pattern extraction · Claude Sonnet · 1 × LONG-CONTEXT CALL
Inputs: top winner pages + customer pages
Outputs: 12 edits · 8 winners · 8 gaps

Final report: ranked recommendations, predicted lift per edit, per-engine raw citations

The four-stage pipeline

01

Ground truth

Query ChatGPT, Perplexity, Claude, and Gemini directly with the inferred buyer query. Capture the URLs each engine actually cited.

REAL ENGINE OUTPUT · NOT A SCORE
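The ground-truth stage can be sketched as a parallel fan-out: send the inferred buyer query to each engine at once and collect the URLs each one cites, in rank order. The per-engine fetchers below are hypothetical stubs, not Citedon's actual clients; a real implementation would stream each engine's SSE response.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-engine fetchers for illustration only. Each takes
# the buyer query and returns that engine's cited URLs in rank order.
def ask_chatgpt(query):    return ["https://a.com", "https://b.com"]
def ask_perplexity(query): return ["https://b.com", "https://c.com"]
def ask_claude(query):     return ["https://a.com", "https://c.com"]
def ask_gemini(query):     return ["https://a.com", "https://d.com"]

ENGINES = {
    "chatgpt": ask_chatgpt,
    "perplexity": ask_perplexity,
    "claude": ask_claude,
    "gemini": ask_gemini,
}

def ground_truth(query):
    """Fan the query out to all four engines in parallel and collect
    each engine's cited URL list (index 0 = rank 1)."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in ENGINES.items()}
        return {name: f.result() for name, f in futures.items()}

citations = ground_truth("best credit cards 2026")
# citations["gemini"] is that engine's ranked cited-URL list
```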
02

Sim panel

Spawn 20 LLM agents, each primed with a distinct persona: skeptical researcher, conversational user, journalist, and so on. Each agent independently ranks which candidate URLs it would cite for the query. The aggregate distribution approximates the answer space the real engines explore.

20 × CLAUDE HAIKU · ~$0.60/SCAN
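One simple way to turn 20 per-agent rankings into the panel's distribution is a top-k hit rate: the fraction of agents that place each URL near the top of their list. This is an illustrative sketch, not Citedon's published aggregation method; the function name and the top-k choice are assumptions.

```python
from collections import Counter

def panel_distribution(rankings, top_k=3):
    """Aggregate per-agent ranked URL lists into a sim-citation rate:
    the fraction of agents placing each URL in their top-k.
    `rankings` is a list of ranked URL lists, one list per agent."""
    hits = Counter()
    for ranked in rankings:
        for url in ranked[:top_k]:
            hits[url] += 1
    n = len(rankings)
    return {url: count / n for url, count in hits.most_common()}

# Toy panel of 4 agents (the real panel uses 20 persona-primed agents).
panel = [
    ["a", "b", "c", "d"],
    ["a", "c", "b", "d"],
    ["b", "a", "d", "c"],
    ["a", "b", "d", "c"],
]
dist = panel_distribution(panel)
# dist["a"] == 1.0 -> every agent put "a" in its top 3
```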
03

Correlation lock

Spearman rank correlation (ρ) between the sim-panel ranks and each real engine's ranks, reported per engine and as a consensus number. A single 0–1 figure that tells you how trustworthy this scan's prediction is.

ρ ≥ 0.75 HIGH CONFIDENCE · 0.50 BASELINE
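For rankings without ties, Spearman ρ reduces to the closed form ρ = 1 − 6Σd² / (n(n² − 1)), where d is the rank difference per URL. A minimal sketch of the correlation lock, with the ρ ≥ 0.50 baseline gate from the pipeline above:

```python
def spearman_rho(ranks_a, ranks_b):
    """Spearman rank correlation for two tie-free rank assignments
    over the same candidate URLs: rho = 1 - 6*sum(d^2) / (n*(n^2-1))."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Sim-panel consensus ranks vs. one engine's observed ranks for the
# same five candidate URLs (1 = cited first). Illustrative values.
sim_ranks    = [1, 2, 3, 4, 5]
engine_ranks = [2, 1, 3, 4, 5]
rho = spearman_rho(sim_ranks, engine_ranks)  # 1 - 6*2/120 = 0.9

BASELINE = 0.50
proceed = rho >= BASELINE  # gate: trust the sim distribution only above baseline
```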
04

Pattern extraction

Claude Sonnet reads the top-cited winner pages alongside the customer's pages. Returns ranked content edits, winner patterns, and structural gaps, each referencing specific elements observed on the pages rather than generic SEO advice.

CLAUDE SONNET · 12 EDITS · 8 WINNERS · 8 GAPS

Why twenty agents.

COST

At ~$0.03 per agent call, twenty draws come to about $0.60 per scan, leaving healthy margin against the $19 unlock. Forty agents would halve that margin without meaningfully improving the signal; ten would leave the panel feeling thin.

STATISTICS

For a binary cited / not-cited outcome, twenty draws give a ±22% margin of error at 95% confidence. Below ten, noise eats the signal. Above thirty, you're paying for precision the buyer can't act on. Twenty is the sweet spot.
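The ±22% figure falls out of the worst-case normal approximation for a proportion: z·√(p(1−p)/n) with p = 0.5 and z = 1.96 at 95% confidence. A quick check across panel sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) normal-approximation margin of error for a
    binary cited / not-cited proportion from n independent draws."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 20, 30, 40):
    print(n, round(margin_of_error(n), 3))
# n=20 -> 1.96 * sqrt(0.25 / 20) ≈ 0.219, the ±22% quoted above
```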

READABILITY

Twenty agents reads as substantial in the report. Concrete enough to trust, small enough to fit on one screen. Five reads as a toy. A hundred reads as LLM theater nobody will dig into.

Validation, by buyer type

Publisher · NerdWallet, Bankrate-style · ρ 0.78
SaaS · Attio, Linear-style · ρ 0.71
Affiliate · BestMoney, Forbes Advisor-style · ρ 0.63

0.50 ≈ baseline · 0.75 ≈ high confidence

Validation across 47 paid scans during private beta. The methodology spec, including the exact queries, agent personas, and engine sampling protocol, is published openly. The number we don't ship is a "confidence score" computed from features unrelated to citation behavior. We report ρ against actual engine output, full stop.

Read the methodology spec →

Where existing tools differ.

What it measures
Profound · Athena · Daydream: Citations today
Citedon: Citations after you ship a change
Method
Profound · Athena · Daydream: Engine polling
Citedon: Engine polling + 20-agent sim + LLM pattern extraction
Output
Profound · Athena · Daydream: Score
Citedon: Ranked edits + winner patterns + gap analysis ranked by predicted lift
Trust signal
Profound · Athena · Daydream: "Our score"
Citedon: Spearman ρ against real engine ranks

Citedon predicts. Comparable tools report. Both are useful. Only one tells you what to do next.

What's in a paid report

Verdict number

A single 0-4 figure with the verdict line. Top of the report.

3 / 4

12 ranked edits

Each edit with a predicted-lift bar. Sorted descending. Specific, not generic.

Winners + gaps

Eight patterns winning pages share. Eight gaps your URL has. Side by side.

Citation audit

Your URLs, sim-citation %, and the real engines each one already appears in.

/best/cards · 0 / 4
/best/cards/travel · 1 / 4
/best/cards/students · 2 / 4
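The "X / 4" column is just a membership count: for each customer URL, how many of the four live engines already cite it. A minimal sketch, with toy engine output standing in for the real per-engine citation sets:

```python
def citation_audit(urls, engine_citations):
    """For each customer URL, count how many live engines already
    cite it -- the 'X / 4' column in the report."""
    return {
        url: sum(url in cited for cited in engine_citations.values())
        for url in urls
    }

# Toy engine output (sets of cited URLs), illustrative values only.
engines = {
    "chatgpt":    {"/best/cards/students"},
    "perplexity": {"/best/cards/travel", "/best/cards/students"},
    "claude":     set(),
    "gemini":     set(),
}
audit = citation_audit(
    ["/best/cards", "/best/cards/travel", "/best/cards/students"],
    engines,
)
# audit -> {"/best/cards": 0, "/best/cards/travel": 1, "/best/cards/students": 2}
```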

Sim-panel top picks

The ten URLs the panel ranked highest, regardless of who owns them.

01 · 94%
02 · 88%
03 · 81%

Per-engine raw citations

Each engine's full cited URL list, expandable. The receipts.

Perplexity · 8 citations

Pricing, briefly

Free competitor audit
Free
See whether AI engines cite any URL today.
Run a free audit
Single report
$19 · once
One URL, fully audited. Predict before you publish.
Buy a single report
Pro
$99 / month
For solo marketers, founders, consultants shipping content weekly.
Start Pro
Growth
$399 / month
Content teams running 5+ brands with serious scan volume.
Start Growth
See full pricing →

Try the free audit first.

No signup. No card. 90 seconds.
