
The answer doesn't match my source

When Ritsu confidently says something wrong, here's why and how to fix it. Hallucinations explained.


If Ritsu's answer contradicts your textbook, paper, or video — the source is right and Ritsu is wrong. Always. Here's why it happens and what to do.

Why this happens

Ritsu uses large language models (LLMs). LLMs are great at language, less great at facts. They sometimes "hallucinate" — produce confident-sounding answers that aren't grounded in the source. Common causes:

Cause 1: The source wasn't fully ingested

If your source upload had OCR errors, missed sections, or skipped tables — Ritsu sees a different version of the content than you do.

Fix: open the source's processed-text view (Sources → click the source → View extracted text). Search for the disputed term. If it's missing or garbled, reprocess the source or re-upload a cleaner copy.
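
If you're curious why a term can hide in plain sight: OCR often mangles individual characters, so an exact-text search misses a word your eye reads just fine. Here's a toy illustration in plain Python (nothing Ritsu-specific; the sample text is made up):

    import difflib

    # OCR commonly confuses characters: "m" can come out as "rn", "l" as "1".
    extracted = "The rnitochondria is the powerhouse of the cell."
    term = "mitochondria"

    # An exact search misses the garbled word entirely...
    print(term in extracted)  # False

    # ...but a fuzzy comparison flags the near-miss, a telltale sign of
    # an OCR error in the extracted text.
    for word in extracted.split():
        ratio = difflib.SequenceMatcher(None, term, word.strip(".,")).ratio()
        if ratio > 0.8:
            print(f"near-miss: {word!r} (similarity {ratio:.2f})")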

Cause 2: The model fell back to general knowledge

Even with a source attached, models sometimes lean on their pre-training data instead of the specific source. This is more common with edge-case topics, niche terminology, or sources where the relevant section is short.

Fix: try /explain "<exact term>" using only the source; the explicit "using only the source" hint nudges the model to stay grounded in the source text. Or run /search "<exact term>" first to confirm the source actually contains what you think it does.
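
Why does the wording help? Most LLM apps prepend an instruction plus the source text to your question, and a strict instruction leaves the model less room to improvise. We can't show Ritsu's actual prompt, but here's the generic pattern as a hypothetical Python sketch (build_prompt and the instruction wording are invented for illustration):

    # Hypothetical sketch of the generic grounding pattern; this is NOT
    # Ritsu's real prompt, just the shape most LLM apps use.
    def build_prompt(source_text: str, question: str, strict: bool) -> str:
        if strict:
            instruction = ("Answer using ONLY the source below. If the source "
                           "does not contain the answer, say so.")
        else:
            instruction = "Answer the question. The source below may help."
        return (f"{instruction}\n\n--- SOURCE ---\n{source_text}"
                f"\n\n--- QUESTION ---\n{question}")

    # The strict wording gives the model far less room to fall back on its
    # pre-training data, which is what "using only the source" nudges.
    print(build_prompt("Ohm's law: V = IR.", "State Ohm's law.", strict=True))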

Cause 3: You're at the edge of the context window

Long PoKs or sources can exceed the model's context window, so Ritsu has to truncate. Truncation drops the parts judged least relevant, and sometimes that includes the exact bit you wanted.

Fix: split the PoK into smaller ones — chapter-by-chapter is usually safest.
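
To see why the bit you wanted can vanish, here's a toy sketch of relevance-based truncation. The chunk scores, the character budget, and the fit_to_context name are all invented for illustration; Ritsu's real pipeline works on tokens and is more sophisticated:

    # Toy model of context-window truncation: keep the highest-scored
    # chunks until the budget runs out. Scores and budget are made up.
    def fit_to_context(chunks: list[tuple[str, float]], budget: int) -> list[str]:
        kept, used = [], 0
        for text, score in sorted(chunks, key=lambda c: c[1], reverse=True):
            if used + len(text) <= budget:
                kept.append(text)
                used += len(text)
        return kept

    chunks = [
        ("Chapter 1 overview ...", 0.9),
        ("The exact equation you asked about ...", 0.4),  # scored low
        ("Chapter 2 summary ...", 0.8),
    ]
    # The low-scored chunk is exactly the one that gets dropped.
    print(fit_to_context(chunks, budget=50))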

Cause 4: The model just got it wrong

Models are imperfect. Even with a small source and unambiguous content, an answer can be wrong. Especially:

  • Math computations
  • Numerical facts (dates, statistics)
  • Anything requiring multi-step deduction

Fix: don't take math from /explain on faith; verify it against your source. Better yet, use /quiz instead: it cites specific source passages with each answer, which makes verification easier.

How to verify any Ritsu answer

Every response in chat has a Sources chip below it. Click to see the exact source passages the answer drew from. If the chip is empty or thin, the model didn't ground well — treat the answer as suspicious.

For longer-form answers (/explain, /summarize):

  1. Highlight a specific claim.
  2. Right-click → Verify in source.
  3. Ritsu opens the source at the matching passage. Compare directly.

What to do when you find a wrong answer

  1. Note it for yourself. Don't let the wrong info stick just because it sounded confident.
  2. Report it. Click the 👎 Wrong button on the response and tell us what the source actually says. We use these reports to improve the model and tune retrieval.
  3. Work around it for now. Ask a more specific question, scope the request tighter (/explain "the specific equation on page 47"), or fall back to /quiz, which forces source citations.

When to trust Ritsu

  • Definitions and conceptual explanations grounded in clearly-extracted sources: usually right
  • Quiz answers with explicit source citations: usually right
  • Multi-step math or quantitative deductions: verify
  • Edge-case factual claims (dates, names, statistics): verify

The pattern: the more specific and citation-heavy the response, the more trustworthy it is. The more sweeping and unsourced, the more skeptical you should be.

A philosophy note

We don't pretend this is a solved problem. AI hallucination is an active area of research, and Ritsu's job is to be useful even when the underlying tech is imperfect. We work hard on the retrieval and grounding side to minimize it, and reporting wrong answers genuinely helps.
