Public Transparency

Analytics & Benchmark

Live stats from our database. Head-to-head comparison against every major fact-check bot on X. No cherry-picking — these are real numbers.

Our Numbers (Live)

Claims Checked: 1,874
Tweets Analyzed: 1,714
Avg. Sources Cited: 3.3 per check
Response Rate: 90.4% of mentions answered
Articles Analyzed: 9,216
News Sources Tracked: 645 (+ 5,000 bias-rated via CSV)
Media Owners Mapped: 72 (with FEC donation data)
Author Profiles: 241 (political lean tracked)

Verified: 544 (29% of all checks)
Disputed: 136 (7.3% of all checks)
Mixed: 614 (32.8% of all checks)
Unverified: 419 (22.4% of all checks)
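The verdict percentages above are simple arithmetic over the live counts. A quick sketch that reproduces them (the counts and total are taken from this page; note the four verdict buckets sum to 1,713 of the 1,874 checks, and the page does not break out the remainder):

```python
# Recompute the dashboard's verdict shares from the raw counts.
# Percentages are taken over all 1,874 checks, as on the page.
TOTAL_CHECKS = 1874
VERDICTS = {"Verified": 544, "Disputed": 136, "Mixed": 614, "Unverified": 419}

for label, count in VERDICTS.items():
    # .1% formats a fraction as a percentage with one decimal place.
    print(f"{label}: {count} ({count / TOTAL_CHECKS:.1%} of all checks)")

# The buckets cover 1,713 checks; the source does not itemize the rest.
covered = sum(VERDICTS.values())
```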

Benchmark vs. Competitors

Feature-by-feature comparison against every major fact-check bot on X. Competitor data from published benchmarks, Tow Center (Columbia), and public testing.

Bots compared: True Source, Grok, GPTZero, ArAIstotle, Community Notes.

- Claim-level breakdown (one competitor: Partial)
- Cites real sources (Grok: fabricates; GPTZero: academic sources only)
- Response time: True Source <60 sec, Grok instant, GPTZero rate-limited, ArAIstotle ~30 sec, Community Notes 14+ hours
- Proactive scanning
- Source ownership + FEC donations
- Poster political profiling
- Image/chart analysis
- Poster accuracy history
- Self-learning (gets smarter)
- Works without being tagged (N/A for Community Notes)

Accuracy Metrics

TrueSourceBot (@TrueSourceBot)
Citation error rate: TBD
Benchmark score: TBD
Analysis categories: 8
Avg. sources per check: 3.3
Avg. atomic claims: 4.2
Response rate: 90.4%

ArAIstotle (@ArAIstotle)
Citation error rate: 14%
Benchmark score: 8.55/10
Analysis categories: 1
Self-published benchmark (Facticity AI). TIME Best Inventions 2024. Mention-triggered, not proactive.

Grok (@grok)
Citation error rate: 94%
Benchmark score: Not tested
Analysis categories: 0
94% citation error rate (Columbia Tow Center). Disqualified from comparison tests — 0 valid responses. User-initiated only (button or @mention).

GPTZero (@GPTZeroAI)
Citation error rate: Not tested
Benchmark score: Not tested
Analysis categories: 0
Source-finder approach — surfaces evidence, no verdicts. 220M+ scholarly articles.

Perplexity (@AskPerplexity)
Citation error rate: 42%
Benchmark score: 5.91/10
Analysis categories: 0
AI search engine, not a dedicated fact-checker. 37% of queries answered incorrectly (The Quint).

Benchmark Sources

Citation error rates: Columbia Tow Center for Digital Journalism study — tested 200 queries across 8 AI platforms. ArAIstotle: 14%, Perplexity: 42%, Grok: 94%.

Benchmark scores: Facticity AI comparison — 45 real tweets, 3 LLM judges (Gemini 2.5 Pro, Claude 3.7 Sonnet, GPT-4o), 180 blinded evaluations. ArAIstotle: 8.55/10, Perplexity: 5.91/10. Note: published by ArAIstotle's own team.

TrueSourceBot metrics: Live data from our production database. Updated every 5 minutes. "TBD" metrics require an independent third-party audit — we will not self-certify.

What We Analyze (8 Categories)

Every fact-check runs through all 8 analysis dimensions. No other bot does this.

  1. Claim Verification: verdict (true/false/misleading/unverifiable) with evidence-backed reasoning and source links.
  2. Missing Context: omitted facts that materially change the interpretation of the claim.
  3. Bias & Narrative Framing: political bias detection, narrative framing techniques, and emotional language flags.
  4. Source Credibility: assessment of the original poster, prior misinformation history, and topic-specific accuracy.
  5. Quote Context Integrity: detects selective quoting — what was cut, and whether meaning was altered.
  6. Original Source Tracing: finds the first known appearance and type of the information source.
  7. Market / Price Sensitivity: flags claims that could move markets, and whether the info is already public.
  8. Topic Classification: domain categorization and contextual framework for understanding the claim.
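Because every check must cover all eight dimensions, the categories behave like a fixed schema. A minimal sketch of that idea; the enum names and the `full_report` helper are our own shorthand, not TrueSource's published API:

```python
# Illustrative only: the eight analysis dimensions as a fixed schema.
# Names are our shorthand, not TrueSource's actual field names.
from enum import Enum

class AnalysisCategory(Enum):
    CLAIM_VERIFICATION = 1
    MISSING_CONTEXT = 2
    BIAS_NARRATIVE_FRAMING = 3
    SOURCE_CREDIBILITY = 4
    QUOTE_CONTEXT_INTEGRITY = 5
    ORIGINAL_SOURCE_TRACING = 6
    MARKET_PRICE_SENSITIVITY = 7
    TOPIC_CLASSIFICATION = 8

def full_report(results: dict) -> bool:
    # A check counts as complete only if every dimension produced a result.
    return all(cat in results for cat in AnalysisCategory)
```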

Monthly Volume

Feb 2026: 1,160
Mar 2026: 714


How We Verify

Every tweet or claim submitted to TrueSource goes through our dialectical verification pipeline:

  1. Image content extraction — OCR via Claude Haiku vision extracts text, statistics, and claims from screenshots, infographics, and article images attached to tweets
  2. Atomic claim extraction — break complex statements into individual, verifiable claims
  3. Multi-source evidence search — cross-reference each claim against news archives, government data, and fact-check databases
  4. Dialectical analysis — search for evidence FOR and AGAINST each claim, weight by source reliability (MBFC data)
  5. Transparent verdicts — every verdict includes the evidence and reasoning, linked from the tweet reply
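The stages above can be sketched in a few dozen lines. Everything here is an illustrative assumption, not TrueSource's code: `search` stands in for the multi-source evidence lookup (stage 3), the `reliability` weights stand in for the MBFC data (stage 4), and the thresholds are arbitrary.

```python
# A minimal sketch of stages 2-5 of the pipeline described above.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str
    reliability: float  # source-reliability weight in [0, 1] (stands in for MBFC data)
    supports: bool      # True = evidence FOR the claim, False = AGAINST

@dataclass
class ClaimVerdict:
    claim: str
    verdict: str        # "true" | "false" | "misleading" | "unverifiable"
    evidence: list[Evidence] = field(default_factory=list)

def extract_atomic_claims(text: str) -> list[str]:
    # Stage 2: split a complex statement into individually checkable
    # claims (a real system would use an LLM, not a sentence split).
    return [s.strip() for s in text.split(".") if s.strip()]

def dialectical_verdict(evidence: list[Evidence]) -> str:
    # Stage 4: weigh FOR vs. AGAINST evidence by source reliability.
    if not evidence:
        return "unverifiable"
    support = sum(e.reliability for e in evidence if e.supports)
    oppose = sum(e.reliability for e in evidence if not e.supports)
    if support > 2 * oppose:
        return "true"
    if oppose > 2 * support:
        return "false"
    return "misleading"

def fact_check(text: str, search) -> list[ClaimVerdict]:
    # Stages 2-5 chained: each verdict carries the evidence behind it,
    # so the reply can link reasoning and sources (stage 5).
    results = []
    for claim in extract_atomic_claims(text):
        ev = search(claim)
        results.append(ClaimVerdict(claim, dialectical_verdict(ev), ev))
    return results
```

The key design point the sketch tries to capture is the dialectical step: evidence on both sides is gathered deliberately, and the verdict comes from weighted comparison rather than from the first source found.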

We publish these stats openly because accountability is the foundation of trust. If we ask you to question your sources, you should be able to question ours. Metrics marked "TBD" require independent third-party testing — we refuse to self-certify accuracy numbers.