Public Transparency

Analytics & Benchmark

Live stats from our database. Head-to-head comparison against every major fact-check bot on X. No cherry-picking — these are real numbers.

Our Numbers (Live)

Claims Checked: 10,298
Tweets Analyzed: 9,106
Avg. Sources Cited: 3.6 per check
Response Rate: 93.4% of mentions answered
Articles Analyzed: 18,317
News Sources Tracked: 877 (+5,000 bias-rated via CSV)
Media Owners Mapped: 72 (with FEC donation data)
Author Profiles: 1,372 (political lean tracked)

Verdict breakdown:
Verified: 2,478 (24.1% of all checks)
Disputed: 780 (7.6% of all checks)
Mixed: 2,020 (19.6% of all checks)
Unverified: 3,018 (29.3% of all checks)
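The verdict shares above follow directly from the raw counts; a quick sanity check reproduces them (counts taken from this page):

```python
# Recompute the verdict percentages from the live counts shown above.
TOTAL_CHECKS = 10_298

verdicts = {
    "Verified": 2_478,
    "Disputed": 780,
    "Mixed": 2_020,
    "Unverified": 3_018,
}

for name, count in verdicts.items():
    share = count / TOTAL_CHECKS * 100
    print(f"{name}: {share:.1f}% of all checks")
# → Verified: 24.1%, Disputed: 7.6%, Mixed: 19.6%, Unverified: 29.3%
```

The four categories sum to 80.6% of all checks; the page does not break down the remainder.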

Benchmark vs. Competitors

Feature-by-feature comparison against every major fact-check bot on X. Competitor data from published benchmarks, Tow Center (Columbia), and public testing.

| Feature | True Source | Grok | GPTZero | ArAIstotle | Community Notes |
| --- | --- | --- | --- | --- | --- |
| Claim-level breakdown | ✓ |  |  |  | Partial |
| Cites real sources | ✓ | Fabricates | Academic only |  |  |
| Response time | <60 sec | Instant | Rate-limited | ~30 sec | 14+ hours |
| Proactive scanning | ✓ | ✗ |  | ✗ |  |
| Source ownership + FEC donations | ✓ |  |  |  |  |
| Poster political profiling | ✓ |  |  |  |  |
| Image/chart analysis | ✓ |  |  |  |  |
| Poster accuracy history | ✓ |  |  |  |  |
| Self-learning (gets smarter) | ✓ |  |  |  |  |
| Works without being tagged | ✓ | ✗ |  | ✗ | N/A |

Accuracy Metrics

TrueSourceBot (@TrueSourceBot, US)
Citation error rate: TBD
Benchmark score: TBD
Analysis categories: 8
Avg. sources per check: 3.6
Avg. atomic claims: 4
Response rate: 93.4%

ArAIstotle (@ArAIstotle)
Citation error rate: 14%
Benchmark score: 8.55/10
Analysis categories: 1
Self-published benchmark (Facticity AI). TIME Best Inventions 2024. Mention-triggered, not proactive.

Grok (@grok)
Citation error rate: 94%
Benchmark score: not tested
Analysis categories: 0
94% citation error rate (Columbia Tow Center). Disqualified from comparison tests — 0 valid responses. User-initiated only (button or @mention).

GPTZero (@GPTZeroAI)
Citation error rate: not tested
Benchmark score: not tested
Analysis categories: 0
Source-finder approach — surfaces evidence, no verdicts. 220M+ scholarly articles.

Perplexity (@AskPerplexity)
Citation error rate: 42%
Benchmark score: 5.91/10
Analysis categories: 0
AI search engine, not a dedicated fact-checker. 37% of queries answered incorrectly (The Quint).

Benchmark Sources

Citation error rates: Columbia Tow Center for Digital Journalism study — tested 200 queries across 8 AI platforms. ArAIstotle: 14%, Perplexity: 42%, Grok: 94%.

Benchmark scores: Facticity AI comparison — 45 real tweets, 3 LLM judges (Gemini 2.5 Pro, Claude 3.7 Sonnet, GPT-4o), 180 blinded evaluations. ArAIstotle: 8.55/10, Perplexity: 5.91/10. Note: published by ArAIstotle's own team.

TrueSourceBot metrics: Live data from our production database. Updated every 5 minutes. "TBD" metrics require an independent third-party audit — we will not self-certify.

What We Analyze (8 Categories)

Every fact-check runs through all 8 analysis dimensions. No other bot does this.

1. Claim Verification: Verdict (true/false/misleading/unverifiable) with evidence-backed reasoning and source links.
2. Missing Context: Omitted facts that materially change the interpretation of the claim.
3. Bias & Narrative Framing: Political bias detection, narrative framing techniques, and emotional language flags.
4. Source Credibility: Assessment of the original poster, prior misinformation history, and topic-specific accuracy.
5. Quote Context Integrity: Detects selective quoting — what was cut, and whether meaning was altered.
6. Original Source Tracing: Finds the first known appearance and type of the information source.
7. Market / Price Sensitivity: Flags claims that could move markets, and whether the info is already public.
8. Topic Classification: Domain categorization and contextual framework for understanding the claim.

Monthly Volume

Feb 2026: 1,160
Mar 2026: 8,123
Apr 2026: 1,008
May 2026: 7


How We Verify

Every tweet or claim goes through a multi-stage verification process. We break down statements into individual checkable facts, search for evidence on both sides, and weigh source reliability before reaching a verdict.

Our system analyzes images, reads full conversation threads, cross-references multiple independent sources, and profiles the poster's track record. Every verdict links to its evidence so you can verify our work.
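The flow described above can be sketched as a toy pipeline. The helpers below are simplified stand-ins (a real system would use NLP-based claim splitting and live evidence retrieval), not the production code:

```python
# Toy sketch of the multi-stage flow: decompose into atomic claims,
# gather evidence on both sides, weigh source reliability, then verdict.

def split_into_atomic_claims(text: str) -> list[str]:
    # Stand-in: treat each sentence as one checkable claim.
    return [s.strip() for s in text.split(".") if s.strip()]

def weighted_stance(evidence: list[dict]) -> float:
    # Reliability-weighted average stance: +1 supports, -1 disputes.
    total = sum(e["reliability"] for e in evidence)
    return sum(e["stance"] * e["reliability"] for e in evidence) / total

def verdict(score: float) -> str:
    if score >= 0.5:
        return "verified"
    if score <= -0.5:
        return "disputed"
    return "mixed"

# Example: mock evidence from both sides for one claim.
evidence = [
    {"stance": +1, "reliability": 0.9},  # reputable source supports
    {"stance": -1, "reliability": 0.2},  # weak source disputes
]
print(verdict(weighted_stance(evidence)))  # → verified
```

The key design point mirrored here is that evidence from both sides is always gathered, and a high-reliability source can outweigh several low-reliability ones.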

We publish these stats openly because accountability is the foundation of trust. If we ask you to question your sources, you should be able to question ours. Metrics marked "TBD" require independent third-party testing — we refuse to self-certify accuracy numbers.