A self-taught developer messages me: "I've watched the full Wes Bos React course, the Stephen Grider one, Jonas Schmedtmann's course, and half of Kent C. Dodds' Epic React. I've taken notes on all of them. But when I sit down to build my portfolio project from scratch, I freeze at `npx create-next-app`. I don't know what's wrong with me."
Nothing is wrong with him. He has just discovered, the hard way, that watching someone else solve a problem is almost entirely uncorrelated with being able to solve that problem yourself. This is not a character flaw or a lack of "grit." It is a specific, well-documented cognitive mismatch between how tutorials are built and how skills are acquired. The name for his failure mode, in developer folklore, is tutorial hell — the state of consuming educational content indefinitely without ever producing the thing the content was ostensibly about.
AI has made tutorial hell worse in a sneaky way. Instead of passively watching, learners now have conversational tutorials on demand. "Explain Redux to me." "Walk me through how useEffect works." "Show me how to build a shopping cart." Each conversation feels productive. Each conversation produces zero actual skill, because skill is not a thing you acquire by watching or by chatting. It is a thing you acquire by trying, failing, and trying again under conditions that force your cognition to adapt.
The good news is that AI is genuinely useful for learning to code — but only in a completely different mode than the one most learners default to. This post is about that mode.
The deliberate practice gap
Anders Ericsson, the Swedish psychologist who spent his career studying expert performance, is the originator of the phrase "deliberate practice" — and also, tragically, the source of the misunderstood ten-thousand-hours rule that Malcolm Gladwell later popularized. Ericsson's actual finding was narrower and more useful: experts do not acquire their skill by accumulating experience. They acquire it by engaging in a specific kind of practice defined by four features:
- Specific, well-defined goals. Not "get better at React" but "ship a component that handles the three failure modes of async data loading."
- Full attention and conscious effort. No multitasking, no "learning while watching YouTube in the background."
- Immediate, high-quality feedback. Ideally from a coach or an environment that surfaces errors fast.
- Repetition with refinement. You do the thing, notice what went wrong, adjust, do it again.
Tutorials fail all four criteria, often simultaneously. The goal is "follow along." The effort is passive. Feedback is nonexistent — the tutorial's code works whether or not yours does. And there is no repetition because the second time you watch, you are watching, not doing.
The practical effect is that tutorials build a specific, limited skill: the skill of understanding code when you read it. They do not build the skill of producing code when you face a blank editor. Those are two different mental muscles, and the second is the one employers, clients, and your own portfolio actually care about.
What "learning to code with AI" actually means
The default mode — "AI, explain this concept" or "AI, write this for me" — is tutorial hell with a chatbot UI. It is not practice; it is consumption. To escape, you have to push every interaction with AI into one of two roles: scaffolded problem-setter or on-demand mentor. Never "tutor" in the passive sense.
Role 1: scaffolded problem-setter
The AI's job is to give you a problem just above your current level, with enough constraint to force learning and enough looseness to allow genuine production.
Good prompt:
I've built a basic React to-do app with useState. I understand useState, useEffect, props,
and basic forms. I've never used context or any kind of state manager.
Give me a small project (2–3 hours) that would force me to need one of those tools.
Describe the user-facing requirements, the 2 or 3 components I'll probably need, and the
data the app has to keep in sync. Do NOT write any code. Just the spec.
The AI returns a spec for, say, a theme-switcher plus a settings panel plus a main content area that both need to read the theme. You build it. You hit the prop-drilling pain. You then have earned the right to ask about context.
Role 2: on-demand mentor
Once you have hit a genuine wall — not a spec-gap, an actual error or design question — you invoke the AI as a mentor. The discipline is to ask in a way that forces you to continue producing:
Bad prompt:
Why isn't this working?
[500 lines of code]
Good prompt:
I'm trying to get my useEffect to refetch when `userId` changes. It runs on mount
but doesn't run when `userId` changes from a button click. I think useEffect's
dependency array should fix this, but it isn't working. Here's the code:
[30 lines of actually relevant code]
Don't fix it for me. Walk me through what I should check, in order, to diagnose it myself.
Only if I come back with "I tried A, B, and C and none worked" should you give me the fix.
The difference is that the bad prompt produces a fish. The good prompt produces a fishing rod and the discomfort of using it. After three or four real debugging sessions in mentor mode, you will have internalized the diagnostic pattern — you will no longer need to ask the AI about that class of bug. That is what learning actually looks like.
A concrete session, in transcript form
Here is a real example of the mentor mode on a tricky async state bug. Notice that the student produces every piece of the eventual fix — the AI only narrows the search.
Student: "My React component re-renders infinitely when I put `fetch` inside `useEffect`. I added a dependency array but it still loops. Code:

const [users, setUsers] = useState([])
useEffect(() => {
  fetch('/api/users').then(r => r.json()).then(data => setUsers(data))
}, [users])

Don't fix it. Walk me through diagnosis."
AI: "Okay. What exactly do you think `useEffect` is watching for in that dependency array?"
Student: "`users` — I want it to refetch when `users` changes."
AI: "Right. Now trace the sequence: effect runs → fetch returns → you call `setUsers(data)` → what happens next?"
Student: "The component re-renders because state changed. Then useEffect… oh. It runs again because `users` is in the dependency array and it just changed. So every fetch triggers another fetch."
AI: "Exactly that. Given that diagnosis, what belongs in the dependency array here?"
Student: "Nothing — I only want this to run once on mount. Empty array."
AI: "Correct. Now, a harder follow-up: when would you legitimately want `users` in a dependency array?"
The student wrote that fix. It will take them a fraction of the time to diagnose this class of bug next time, because they produced the diagnostic chain themselves. Contrast this with the bad-prompt version, which would have returned "change [users] to []" and left zero cognitive residue.
The practical protocol
Put together, here is the twelve-week version of the escape plan. Adapt it for your own learning target — "web dev," "backend APIs," "data pipelines," whatever.
Week 1 — Pick a real target and a concrete project
Not "learn Python." "Ship a CLI that ingests a CSV, de-duplicates by email, and prints counts per domain." The spec should be tight enough that you can imagine exactly what "done" looks like.
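To make "done" that concrete, the core of the spec above fits in a dozen lines. This is a sketch, not an answer key — the `email` column name and the in-memory sample data are assumptions for illustration:

```python
import csv
import io
from collections import Counter

def count_domains(rows):
    """De-duplicate rows by email (case-insensitive), then count unique emails per domain."""
    seen = set()
    for row in rows:
        seen.add(row["email"].strip().lower())
    # The domain is everything after the '@'
    return Counter(email.split("@")[1] for email in seen)

# Demo on an in-memory CSV so the sketch is self-contained
sample = "email\nann@x.com\nANN@x.com\nbob@y.org\n"
counts = count_domains(csv.DictReader(io.StringIO(sample)))
print(sorted(counts.items()))  # [('x.com', 1), ('y.org', 1)]
```

If you can picture output like that before you start, the spec is tight enough.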
Week 2 — Build an ugly version yourself
Fire up your editor. Use whatever you already know. The code will be bad. Ship it anyway. The goal is to produce a thing to iterate on, not to produce good code. Most tutorial-hell sufferers never get to this step because the tutorials keep promising "soon we'll build something together" — and then the tutorial ends and they are still on the consumption treadmill.
Week 3 — Invoke the mentor mode
Now look at the ugly version and ask yourself: what do I not know how to do better? Pick the single biggest weakness. Maybe your de-duplication by email is a nested loop that re-scans the whole list for every row. Ask the AI, in mentor mode, to walk you through how to reason about Python dictionaries as lookup tables. Do not ask it to rewrite your code.
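For orientation, the insight that mentor session should lead you to looks roughly like this (a sketch with made-up data): checking membership in a list re-scans the list every time, while a set or dict is a hash table, so each check is a constant-time lookup.

```python
def dedupe_slow(emails):
    """Quadratic: 'email not in unique' re-scans the list on every iteration."""
    unique = []
    for email in emails:
        if email not in unique:  # linear scan of the list
            unique.append(email)
    return unique

def dedupe_fast(emails):
    """Linear: a set is a hash-based lookup table, so membership checks are O(1)."""
    seen = set()
    unique = []
    for email in emails:
        if email not in seen:  # constant-time hash lookup
            seen.add(email)
            unique.append(email)
    return unique

print(dedupe_fast(["ann@x.com", "bob@y.org", "ann@x.com"]))  # ['ann@x.com', 'bob@y.org']
```

Both versions return the same result in the same order; the difference only shows up as the input grows, and noticing that is exactly the reasoning the mentor session should surface.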
Week 4 — Ship v2 with the new technique
Rewrite the weak part without looking back at the AI's explanation — just the technique name, held in your head. If you cannot do it without re-consulting, that is a signal you have not learned it yet; go back to mentor mode with more specific questions.
Weeks 5–12 — Iterate
Loop the cycle. Every week, one target weakness, one rewrite, one shipped version. By week twelve you have seven or eight rewrites of the same project, each addressing a different deficiency. That is what a portfolio of learning looks like, as opposed to a portfolio of tutorials.
What this looks like as an environment
The difficulty with the protocol above is that it requires constant discipline to stay out of the AI's default "write the code for me" mode. Most developers drift back within a few sessions. You ask one lazy question, the AI gives a clean answer, the muscle atrophies.
One of the things we built Ritsu around is this exact constraint — when you work through a coding concept in a learning session, the code canvases present the problem, let you produce code against it, and the AI evaluates your attempt instead of writing code for you. If you get stuck, the AI offers a hint, not a solution. The environment is built to keep you in the producer role. See how Ritsu's code canvases work →
Edge cases: when the AI should just give you the code
To be clear — there are real cases where asking the AI to write code for you is the right answer:
- You already know how to do it but you are tired. Boilerplate, config files, TypeScript type definitions you have written a hundred times. Have the AI bang it out. You are not trying to learn; you are trying to get the work done.
- You need an example of unfamiliar library syntax. "Show me the canonical way to set up a Zod schema for a nested object." This is look-up, not learning.
- Production is broken and the deadline was yesterday. Same principle as mentor mode, inverted: pedagogy is for when your goal is to become someone who can solve this class of problem. When the goal is to fix this problem now, skip it.
The rule remains: protocol mode for the thing you want to get better at. Direct-answer mode for the thing you want to finish.
FAQ
Q: Won't this be much slower than just doing tutorials? A: In hours watched, yes. In hours that move you closer to being able to build real software, no — tutorials move you roughly zero hours closer once you have enough basic syntax, which is usually inside the first ten to twenty hours of any language. Everything after that is production.
Q: What about code review? Should I have the AI review my code? A: Yes, and this is one of the single highest-leverage uses. But specify: "Review my code. Tell me what you would change and why, but do not rewrite any of it. If you see a bug, describe the class of bug, not the specific fix." This preserves the producer role while getting the feedback signal that deliberate practice demands.
Q: How do I know I am picking the right project? A: The project is right if you can imagine "done" clearly and you do not currently know how you would implement at least two of the sub-parts. If you already know how, it is not practice — it is production. If you cannot even imagine done, the spec is too ambitious. Shrink it.
Q: What about pair programming with AI in Cursor or Copilot? A: These tools are calibrated for productivity, not learning. They are actively harmful when you are trying to build a skill — they complete your thought before you have finished it. Turn them off during deliberate-practice sessions. Turn them on for production work.
Q: When do I stop being a learner and start being a developer? A: When the "learn it" part of a new task is a small fraction of the total time. Early on you spend 90% of your time learning and 10% producing. At some point it flips. When you are routinely 80/20 producing/learning on real projects, you are a developer. That transition takes most self-taught people about a year of deliberate practice — or three to five years of tutorials.
The takeaway
You do not learn to code by watching someone else code. You learn by writing bad code, breaking it, and asking the right questions about why. Try Ritsu free → and practice in an environment built to keep you on the producing side of the screen.
