Ritsu is slow or laggy
Most slow-response reports are peak-hour load, big context, or background browser tabs. Quick triage here.
If Ritsu feels slow, the cause is usually one of: peak-hour load on the AI provider, a too-large PoK / source context, browser tab pressure, or your network. Here's the triage.
Quick checks
Peak hours
AI provider latency spikes during US/EU office hours (roughly 14:00-22:00 UTC). A response that streams in 8 seconds at 03:00 UTC might take 30 seconds at peak. Not a Ritsu bug — model providers throttle. Workarounds:
- Wait it out — peak passes within hours
- Pro users get priority routing, which softens the slowdown but doesn't remove it entirely
Large context
If your PoK is built on a 500-page source and you ask /explain to summarise the whole thing, Ritsu has to feed all of it to the model — which takes time + tokens.
Symptom: first response after switching to a new PoK is slow; subsequent responses are fast.
Workarounds:
- Use smaller, more focused PoKs (a chapter, not the whole book)
- Use /search first to find relevant sections, then /explain scoped to those
- Run /summarize once to get a fast overview, then dive into specific parts
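To get a feel for why a whole-book PoK is slow, you can estimate context size with the common rule of thumb of roughly 4 characters per token for English prose. The page and character counts below are illustrative assumptions, not Ritsu internals:

```python
def estimate_tokens(text: str) -> int:
    # Rough estimate: English prose averages ~4 characters per token.
    return len(text) // 4

chars_per_page = 3_000  # assumed average for a dense page of text

whole_book = "x" * (500 * chars_per_page)
one_chapter = "x" * (20 * chars_per_page)

print(estimate_tokens(whole_book))   # 375000 tokens: a huge context to assemble
print(estimate_tokens(one_chapter))  # 15000 tokens: a far quicker prompt
```

The gap between those two numbers is the difference you feel in that slow first response after switching PoKs.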
Browser tabs
If you have 30 tabs open, your browser may be context-switching aggressively. Streaming connections compete for CPU.
Workaround: close tabs you're not using. Or use a fresh browser window for Ritsu.
Slow network
Streaming connections are sensitive to network jitter. A 4G connection on a moving train will feel choppy even if total throughput is fine.
Workaround: switch to Wi-Fi if you can. If the issue persists on stable Wi-Fi, see Chat won't load or respond.
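Jitter here just means the variation between consecutive latency samples. A minimal sketch of how you might quantify it from ping times; RFC 3550 defines a smoothed version, and the plain mean of absolute differences below is a simplification with made-up sample numbers:

```python
def jitter_ms(latencies_ms):
    # Mean absolute difference between consecutive ping samples, in ms.
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

print(jitter_ms([20, 21, 19, 20, 22]))    # 1.5 ms: stable Wi-Fi, smooth streaming
print(jitter_ms([40, 120, 35, 200, 60]))  # 117.5 ms: moving-train 4G, choppy streaming
```

Note that both connections above could report similar average throughput; it's the variation, not the average, that makes streaming feel choppy.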
Specific symptoms
Streaming starts fast, then slows mid-response
Usually network jitter. The first chunks arrive quickly because they were buffered; once the buffer drains, delivery drops to real-time network pace. Refresh and retry on a more stable network.
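You can see this buffered-start pattern in the gaps between chunk arrival times. A minimal sketch with simulated timestamps; Ritsu doesn't expose these, and the 3x threshold is an arbitrary illustration:

```python
def chunk_gaps(timestamps):
    # Inter-arrival gaps (seconds) between consecutive stream chunks.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def slows_mid_response(timestamps, factor=3.0):
    # True if the later half of the stream arrives much slower than the first half.
    gaps = chunk_gaps(timestamps)
    half = len(gaps) // 2
    early = sum(gaps[:half]) / half
    late = sum(gaps[half:]) / (len(gaps) - half)
    return late > factor * early

# Simulated arrivals: buffered chunks land fast, then pace drops to real time.
arrivals = [0.0, 0.05, 0.10, 0.15, 1.2, 2.3, 3.5, 4.8]
print(slows_mid_response(arrivals))  # True: the classic buffered-start pattern
```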
Always slow regardless of time of day
Could be your network. Run a speed test (fast.com). If you're getting under 2 Mbps sustained, that's the problem.
If your network is fast but Ritsu is consistently slow, email support@ritsu.ai with:
- Your timezone + a few timestamps when it was slow
- Speed-test result
- The PoK size if known (bigger PoK = bigger context)
Quiz / flashcard generation is slow
Generation tasks are heavier than chat — they involve more model passes. Expect 10-30 seconds for a 5-question quiz, 20-60 seconds for a 10-card flashcard deck. If it's much longer than that, retry — sometimes the first generation hits a slow path that doesn't repeat.
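The retry advice above can be sketched as a simple time-budget loop. Here `generate` is a hypothetical callable standing in for the quiz or flashcard request; Ritsu's actual API isn't shown here:

```python
import time

def generate_with_retry(generate, budget_s=60, attempts=2):
    # Run the generation task; if it blows well past the expected window,
    # try once more, since the slow path usually doesn't repeat.
    result = None
    for _ in range(attempts):
        start = time.monotonic()
        result = generate()
        if time.monotonic() - start <= budget_s:
            return result  # finished within the expected window
    return result  # keep whatever the last attempt produced
```

For a 10-card deck you'd set budget_s=60, matching the upper end of the expected range above.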
Saving a deck is slow
Saving uploads the rendered deck over the network and usually takes under 2 seconds. If it spins forever, refresh and retry; your progress isn't lost.
When to upgrade
Pro plan gets:
- Priority model routing (faster on average, especially at peak)
- Higher per-request token limits (bigger context, fewer "context too large" errors)
- Unlimited daily commands
If you're consistently at the Free plan's edges, Pro pays for itself in time saved.
Still slow?
Email support@ritsu.ai with timestamps + a sample slow query. We can correlate to provider-side latency in our metrics.