Free Hallucination Detector

Spot AI Hallucinations Before You Publish

Paste any AI-generated text — we flag fake citations, invented statistics, and made-up sources. Catch hallucinations in 30 seconds, free.

What is it

What is an AI Hallucination?

"Hallucination" is the polite term for when AI models make things up. ChatGPT cites a research paper that doesn't exist. Claude attributes a quote to the wrong author. Gemini invents a statistic that sounds plausible but has no source. The text reads confident and authoritative — and it's wrong.

Studies suggest 15-30% of AI-generated content contains at least one factual hallucination. For students, journalists, marketers, and researchers, publishing one of these is professionally costly.

TextSight's Hallucination Detector reads AI drafts the way a careful editor would: extract every factual claim, every citation, every statistic, every name — then flag the ones that look fabricated. Not perfect, but it catches the obvious ones.

FAQ

Common questions

Is the hallucination detector free?

Yes. You get 3 checks per day with no signup, or 20 per day with a free account. The caps are tighter than our lighter tools' because each check extracts claims and reasons over every one of them.

How does it detect hallucinations?

The tool breaks your text into discrete factual claims (citations, statistics, dates, names, quotes), then evaluates each one against Claude Sonnet 4's training knowledge plus pattern matching for fabricated-citation signatures.
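To make the "extract claims, then check them" idea concrete, here is a minimal sketch of the pattern-matching layer only. This is illustrative, not TextSight's actual code: the patterns, names, and example text are assumptions, and the real tool additionally reasons over each extracted claim with an LLM rather than relying on regexes alone.

```python
import re

# Hypothetical heuristics for spotting claim-shaped spans worth verifying.
# Each regex targets one kind of verifiable claim from the FAQ above.
CLAIM_PATTERNS = {
    "citation":  re.compile(r"\([A-Z][A-Za-z]+(?: et al\.)?,? \d{4}\)"),  # (Smith et al., 2021)
    "statistic": re.compile(r"\b\d{1,3}(?:\.\d+)?%"),                     # 87%, 15.5%
    "url":       re.compile(r"https?://\S+"),                             # links to check
}

def extract_claims(text: str) -> list[tuple[str, str]]:
    """Return (kind, matched span) pairs that a verifier should examine."""
    found = []
    for kind, pattern in CLAIM_PATTERNS.items():
        for match in pattern.finditer(text):
            found.append((kind, match.group(0)))
    return found

draft = ("87% of marketers report better results (Smith et al., 2021). "
         "See https://example.com/study for details.")
for kind, span in extract_claims(draft):
    print(kind, span)
```

Each flagged span would then go to a second stage that judges whether it looks fabricated; extraction by itself only narrows down what to check.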

What KINDS of hallucinations does it catch?

Fake academic citations (papers that don't exist), invented historical events, made-up statistics ("87% of marketers report…" when no such study exists), mis-attributed quotes, non-existent URLs, wrong dates or contested facts presented as definite.

What WON'T it catch?

Opinions presented as fact (not technically a hallucination), very recent events past Claude's knowledge cutoff (without web search, it can't verify them), domain-specific minutiae (rare technical claims), and numerical claims with no specific source ("around 50%").

How is this different from the Fact-Checker?

A fact-checker verifies claims against the live web. The Hallucination Detector specifically looks for the SIGNATURES of AI fabrication — fake citations, plausible-but-non-existent details. Best to use both: hallucination check first (cheap, fast), web fact-check second.

Will my text be saved?

No. Your text is sent to our LLM API for the single check, then deleted. It is not stored and not used for training.

Got an AI draft? Run it through Detector + Hallucination Detector

3 free Detector scans/day. Confirm your draft reads human AND has no fake citations.