The Feynman Technique for the AI Era: Teaching AI to Catch What You Don't Understand

The reason the Feynman technique works is not that you're teaching — it's that teaching surfaces the gaps you could not see. Here is how to use AI as a patient, infinitely available interlocutor to run the loop on yourself.

Ritsu Team · 12 min read

A physics student walks out of a lecture on electromagnetic induction feeling like she understands Faraday's law. She can restate it. She can identify it on a list. She nods along when the professor works through examples. Two weeks later, in a problem set, she is asked to explain why a bar magnet falling through a copper tube falls slower than the same magnet falling through an identical plastic tube. She freezes. She can recite the law. She cannot apply it to a situation she has not seen before.

The gap between "I understand this" and "I can produce an answer from this" is the most destructive illusion in education. Psychologists call it the illusion of explanatory depth, and it is distinct from simple ignorance — ignorance, at least, you are aware of. The illusion is worse: you feel you understand, you would bet money that you understand, and you only discover you do not when the situation forces you to generate a novel answer. By that point it is usually too late to recover.

Richard Feynman, arguably the finest physics teacher of the twentieth century, was famous for a routine that detects this illusion before it surfaces on an exam. The four-step technique that now bears his name, distilled from his lectures and accounts of his study habits, goes like this: pick a concept, explain it out loud in simple language as if teaching a child, notice exactly where you stumble, go back to the source to fix the gap, and repeat. The genius is not in the "teach a child" framing — it is that forced production under a simplicity constraint makes the gaps audible in a way that no amount of re-reading can.

What Feynman could not do, because he lived before the technology existed, was run this loop at arbitrary speed with a partner who never got tired of being the child. We can. This post is about exactly that.

Why "teach to understand" is not a cliché

The research literature is unusually consistent on this point. Michelene Chi's self-explanation studies (1989 and onward) showed that students who paused to explain worked examples to themselves during study outperformed students who just read the same examples, even when total study time was identical. The explaining group did something the reading group did not: they forced themselves to produce an account of why each step followed from the previous one, which meant they hit their own gaps and filled them.

The Dunning-Kruger effect is the uglier, more famous half of the same phenomenon. Kruger and Dunning (1999) demonstrated that the people who perform worst on a task also overestimate their performance the most: they simply lack the metacognitive skill to recognize their own errors. This creates a bootstrap problem: to know you are wrong about X, you typically need to understand X at least well enough to catch the error — which is exactly what you lack when you are wrong about X.

The Feynman technique is a workaround for this bootstrap problem. You do not need to know the gap in order to find it. You just need to try producing an explanation — and the structure of the explanation itself, the stumbles and hand-waves and "and then something happens" moments, will surface the gaps automatically. The gaps are audible even to the person making them, as long as they are producing out loud rather than re-reading.

The specific cognitive mechanic

When you read a well-written explanation of a concept, your brain pattern-matches it to existing knowledge, registers "this makes sense," and moves on. Almost no new consolidation happens because nothing had to be produced. When you try to explain the same concept from scratch, your brain has to reconstruct the causal chain — and every missing link in that chain forces a stop. You cannot smooth over a gap you are producing; you can only smooth over a gap you are reading.

This is why Feynman's "as if teaching a child" framing matters specifically. It strips the technical vocabulary that experts use to camouflage gaps. If you can only explain Faraday's law using the phrase "flux through the surface," you do not actually understand Faraday's law; you have memorized a formula and its label. If you can explain it in plain English — why does a moving magnet produce a current? — you do understand it. The child-audience constraint forces the stripping.

The four steps, with an AI twist

Feynman's original four steps, updated with the concrete prompt patterns that make them work against a modern AI:

Step 1 — Pick one concept

Narrow enough that you could explain it in a single paragraph if you actually understood it. Not "electromagnetism." Instead: "Why does a changing magnetic field induce a current in a wire?" Or: "Why is the derivative of x^2 equal to 2x?" Or: "Why does finding the median of an unsorted list require at least linear time?"
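Questions at this grain size tend to bottom out in a few lines you can actually check. For the derivative example, the answer a plain-English explanation must ultimately be able to reproduce is the limit computation itself:

```latex
\frac{d}{dx}x^2
  = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
  = \lim_{h \to 0} \frac{2xh + h^2}{h}
  = \lim_{h \to 0} (2x + h)
  = 2x
```

If your paragraph cannot account for where the h^2 term goes, that is a gap, and the loop below will find it.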

Step 2 — Produce an explanation out loud

Write it in a text file or speak it into a voice note. The key constraint is that you cannot consult the source material. Use only what is already in your head. Force the production to be end-to-end — no "I'll come back to this part." Fill every gap with something, even if the something is "I think this happens, though I am not sure why."

Step 3 — Have the AI play the curious skeptic

This is the step that the Feynman-era workflow could not do. Paste your explanation and invoke the interlocutor:

Below is my explanation of {CONCEPT}. Play the role of a sharp, curious student
who takes nothing for granted. For every sentence I write:

1. If something is vague or hand-waved, ask me to be specific.
2. If I use a technical term without defining it, ask me to define it.
3. If I make a claim that has exceptions, edge cases, or hidden assumptions, probe them.
4. If my explanation has a logical gap — a step that does not follow from the previous —
   point it out and ask me to fill it.

Do NOT give me the correct explanation yet. Your job is to find every weakness and ask me
to defend it. Go through the whole thing.

My explanation:

{YOUR EXPLANATION}

The AI will return something like twelve to twenty questions. Some will feel pedantic. That is the point. Each question corresponds to a specific moment where your production faltered and you smoothed over it.
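If you run this step often, it is worth scripting the prompt so the constraints never get dropped in a hurry. A minimal sketch in Python — the function name and template assembly are ours, not a Ritsu or vendor API; paste the returned string into whatever chat interface or API client you already use:

```python
def build_skeptic_prompt(concept: str, explanation: str) -> str:
    """Assemble the Step 3 'curious skeptic' prompt for any chat-based AI.

    Mirrors the four probing rules above and explicitly forbids the
    model from revealing the correct explanation prematurely.
    """
    return (
        f"Below is my explanation of {concept}. Play the role of a sharp, "
        "curious student who takes nothing for granted. "
        "For every sentence I write:\n\n"
        "1. If something is vague or hand-waved, ask me to be specific.\n"
        "2. If I use a technical term without defining it, ask me to define it.\n"
        "3. If I make a claim that has exceptions, edge cases, or hidden "
        "assumptions, probe them.\n"
        "4. If my explanation has a logical gap, point it out and ask me "
        "to fill it.\n\n"
        "Do NOT give me the correct explanation yet. Your job is to find "
        "every weakness and ask me to defend it. Go through the whole "
        "thing.\n\n"
        f"My explanation:\n\n{explanation}"
    )

# Example usage with an abbreviated explanation:
prompt = build_skeptic_prompt(
    "electromagnetic induction",
    "When a magnet moves near a wire, it creates an electric current...",
)
```

Keeping the prompt in one place also means every iteration of the loop gets identical probing rules, so differences between rounds reflect your explanation, not prompt drift.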

Step 4 — Answer each question, then loop

Go back and try to answer each question. The ones you can answer are not gaps. The ones you cannot are. For each real gap, go to the source material, learn just that specific piece, and re-integrate it into your mental model. Then rewrite the explanation and run Step 3 again.

Typically you converge in two to three iterations. By the end, the explanation is tight and you have closed the gaps — in a way that re-reading cannot produce, because re-reading does not force you to produce.

A real transcript

Here is what Step 3 looks like in practice. A student has just explained electromagnetic induction in free text. The AI plays the skeptic.

Student's explanation (abbreviated): "When a magnet moves near a wire, it creates an electric current in the wire. This is called induction. The strength of the current depends on how fast the magnet is moving and how many coils the wire has. The current flows in a direction that opposes the change in the magnetic field, which is Lenz's law."

AI: "Questions:

  1. You said 'when a magnet moves near a wire it creates an electric current.' Does the magnet itself create the current, or is something else doing it? What exactly is moving inside the wire when current flows?
  2. You used the word 'field' without defining it. What is a magnetic field?
  3. You said the strength depends on 'how fast the magnet is moving.' Is the absolute speed what matters, or is it the rate of change of something else? If I moved the magnet at constant speed through a region where the field was uniform, would a current still flow?
  4. You said 'current flows in a direction that opposes the change.' Opposes how? Is there a specific physical quantity that is conserved by this opposition?
  5. Does the wire have to form a closed loop for induction to work? Would a straight, open-ended wire do the same thing?"

Question 3 will stop most students cold. It is the heart of the concept — induction is driven by the rate of change of magnetic flux, not by the magnet's absolute motion — and it is the exact thing they did not actually understand. They had memorized "faster magnet = stronger current" as a stimulus-response pair, not as a consequence of dΦ/dt.
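For reference, the flux formulation that question 3 is driving at, with N the number of turns in the coil and Φ the magnetic flux through the loop:

```latex
\mathcal{E} = -N\,\frac{d\Phi}{dt},
\qquad
\Phi = \int_{S} \mathbf{B} \cdot d\mathbf{A}
```

The minus sign is Lenz's law: the induced current opposes the change in flux. And the answer to the skeptic's probe falls out directly: if the flux through the circuit is not changing, no EMF is induced, no matter how fast anything is moving.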

The student now goes to the source, learns the flux formulation, answers the AI's questions, rewrites the explanation, runs Step 3 again. After two iterations, they can explain induction in plain English all the way down to the flux derivative and Lenz's law as a conservation principle — which is a real understanding, not a borrowed shape.

A checklist for running the loop on anything

The Feynman loop works on any learnable concept. Run it against the following checklist for each iteration:

  1. Did I use a jargon term without defining it? If yes, I do not understand that term. Go find the definition.
  2. Did I make a claim without a mechanism? "X causes Y" without explaining how. If yes, I do not understand the causation.
  3. Did I use hedging language to paper over uncertainty? "Somehow," "kind of," "in some way." If yes, I have a gap exactly there.
  4. Did my explanation depend on a prior concept I have not explained? If yes, that concept is a prerequisite I may not understand either.
  5. Could someone with no background actually produce the right answer to a related problem after reading my explanation? If no, the explanation is incomplete.

Any "yes" to questions one through four, or any "no" to question five, is a gap worth fixing before moving on.
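The hedging check in item 3 is mechanical enough to pre-screen automatically before you even invoke the AI. A rough sketch — the word list is illustrative, not exhaustive, and the function name is ours:

```python
import re

# Illustrative, not exhaustive: phrases that usually paper over a gap.
HEDGES = ["somehow", "kind of", "sort of", "in some way", "basically",
          "something happens", "for some reason"]

def find_hedges(explanation: str) -> list[str]:
    """Return each sentence of the explanation that contains hedging
    language, i.e. a likely gap per checklist item 3."""
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", explanation)
    flagged = []
    for sentence in sentences:
        lowered = sentence.lower()
        if any(hedge in lowered for hedge in HEDGES):
            flagged.append(sentence)
    return flagged

text = ("A moving magnet changes the flux. Somehow this pushes the "
        "electrons along. The current then opposes the change.")
print(find_hedges(text))  # flags the "Somehow" sentence
```

A scan like this catches only the shallowest tells; the undefined-term and missing-mechanism checks still need the AI skeptic, because they require understanding what the sentence claims.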

What this looks like as an environment

The Feynman loop is shockingly high-leverage and shockingly rarely used, because it requires discipline and a patient interlocutor. Most people, left alone with the task, will either skip the production step (the one that matters) or stop at the first iteration of skeptical questioning. A good learning environment makes it the default workflow, not a thing you have to remember to do.

Ritsu's learning sessions are built around exactly this loop. After every short mini-lecture, the session runs a reflection: you are asked to restate the concept in your own words, then a reflector model scans your restatement for vague claims, undefined terms, and logical gaps — and surfaces the specific follow-up questions that expose where your understanding actually ends. The loop repeats until the reflection is tight, not until you have simply read the material. If you want to practice the Socratic version of this — AI asking first, you producing first — we wrote about that approach in detail here.

Edge cases: where Feynman struggles

Two cases where the loop is not the right tool:

  1. Pure memorization tasks. The atomic mass of carbon, the date of the Treaty of Westphalia, irregular verb conjugations in Spanish. Feynman explanations of a random fact are not useful — the fact is the fact. Use retrieval practice and spaced repetition for these, not the Feynman loop.
  2. Skills, not concepts. Riding a bicycle, playing a violin, writing idiomatic Rust. These are procedural, not declarative. Producing an explanation of how you do them does little — deliberate practice against real problems is what moves the needle.

The Feynman technique is for conceptual understanding of declarative knowledge, which covers most of what is tested in school and most of what breaks down later when you try to transfer learning to new situations. That is a huge slice of the learning that matters, and it is the slice where the illusion of understanding does the most damage.

FAQ

Q: How long should a Feynman session take? A: A single concept, start to finish (three iterations), typically takes twenty to forty minutes. Shorter if the concept is simple; longer if you have to learn several prerequisites along the way. It is not a quick review technique — it is a deep-learning technique, used on the concepts you absolutely must understand.

Q: Can I do this without writing anything down? A: You can, but it is much less effective. The act of writing forces sentence-level production in a way that mental rehearsal does not. Mental rehearsal is full of "you-know-what-I-mean" smoothing; writing is not.

Q: What if the AI's probing questions are off-base or pedantic? A: Some will be. Skip them. The signal is the questions you cannot easily answer — those are the gaps. Pedantic questions you can answer in one sentence are not gaps; they are evidence of mastery.

Q: Does this work for group studying? A: Yes, and this is traditionally how the Feynman technique was run — one student explains to another who asks the probing questions. The AI version lets you run it alone, on your schedule, without needing to find a study partner who shares your topic. Group Feynman sessions are excellent when they are available; AI Feynman sessions are the practical fallback.

Q: Is there a risk of the AI "hallucinating" a gap that is not real? A: Occasionally, yes — the AI may push back on a claim you actually got right. The correction is to hold your ground if you can defend the claim against the objection. If you can defend it, there is no gap. If you cannot, the gap is real even if the AI's framing was imperfect.

The takeaway

If you cannot explain a concept without using the jargon that paints over your uncertainty, you do not understand it yet. Try Ritsu free → and learn with a tool built around the one loop that forces your understanding into the open.

Keep learning with us

Get new posts on effective learning, spaced repetition, and AI-powered study techniques.