Opinion · 9 min read

Stop Watching AI Tutorials. Open a Playground Instead.

Watching AI tutorials is passive learning. You retain about 20% and never build real intuition. Here's why a playground beats 40 hours of AI videos.

Abraham Jeron
April 22, 2026

TL;DR

  • Watching AI tutorials is passive pattern recognition. You process information receptively and retain roughly 20% a day later.
  • A playground forces generative processing: you make predictions, run experiments, and build mental models you can actually pull out under pressure.
  • We tried theory-first onboarding at Kalvium Labs. Less than 5% of engineers made it through a full theory screen before jumping to the playground.
  • Open Exercise 1-1 at app.tinkerllm.com, type 'Jana Gana M', and watch what happens. Sixty seconds. No tutorial needed.
  • Videos are fine for surveys and inspiration. They fail for mechanical skill acquisition. That distinction is worth holding.

We had a new engineer join Kalvium Labs in early 2025. Before their first sprint, they’d gone through roughly 40 hours of YouTube AI tutorials. They knew the vocabulary cold: temperature, top-K, few-shot prompting, RLHF. They’d watched all the major channels. Multiple playlists.

At our first code review, I asked one question: what does setting temperature to 0 actually change in the model’s behavior?

Long pause.

Not a thinking pause. An “I know I’ve heard this somewhere” pause. They could picture the slide. They could picture the presenter hovering over the diagram. But the connection between the concept and the code wasn’t there. Forty hours of watching had produced a lot of vocabulary and zero working intuition.

That engineer isn’t unusual. They’re the norm.

The Point, Stated Early

Tutorial-watching is passive pattern recognition. You watch a prompt happen, understand it in the moment, close the tab, and retain maybe 20-30% a day later. That’s not a character flaw. That’s how passive learning works.

Every working AI engineer I know learned by opening a terminal or a playground, typing something, and being surprised by the output. Not by watching someone else be surprised.

The fix isn’t a better video. The fix is sending the prompt yourself.

Why Tutorials Feel Productive But Aren’t

There’s a specific mechanism here, not just a vibe.

When you watch a tutorial, you process information receptively. Your brain recognizes the concept as familiar, and recognition feels like comprehension. You think “yes, I get that” and move on. But recognition and retrieval are different things. Recognition fires when the concept is in front of you. Retrieval fires when you need it at a code review with nothing on screen but a parameter name.

You also never form the motor memory of actually doing the thing. Changing temperature from 0.9 to 0.1 and watching the output shift from creative to repetitive is a 10-second experiment. Watching someone else do it while talking over it takes 8 minutes. And your hands stay still the whole time.
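That 10-second experiment has a mechanism you can sketch in a few lines. This is a toy, not any provider's actual sampler, and the logits are invented numbers; but it shows what dividing by temperature does to next-token probabilities: low temperature sharpens the distribution around the top token, high temperature flattens it.

```python
import math

def temperature_softmax(logits, temperature):
    """Scale logits by 1/temperature, then softmax into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]

low = temperature_softmax(logits, 0.1)   # sharp: the top token dominates
high = temperature_softmax(logits, 0.9)  # flatter: runners-up stay in play

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

Run it and the "creative to repetitive" shift stops being a slogan: at 0.1 the first token takes essentially all the probability mass, at 0.9 the other two candidates keep a real share.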

And you only ever see the demo path. The tutorial creator recorded the version where things went right. You don’t see the run where temperature 1.5 produced absolute garbage, or where the model responded in the wrong language because a system instruction conflicted with the user prompt. Those edge cases are where your intuition gets built.

The research backs this up. Freeman et al. ran a meta-analysis of 225 STEM studies comparing active and passive learning and found that students in traditional lecture formats failed at 1.5 times the rate of students doing active work. The effect held across disciplines and class sizes. Knowing this won’t make a YouTube tutorial feel less comfortable. But it explains why you feel confident watching and confused when you try to apply it later.

What a Playground Gives You

The thing a playground does that a video can’t: it creates adversarial curiosity.

You set temperature to 0. You see a consistent output. You wonder, naturally: “What happens at 2.0?” So you change it. The output stops making sense. Now you’ve discovered the practical ceiling through direct observation, not because a presenter told you about it. That observation sticks in a way the explanation never would.

You also get the parts tutorials skip. Real model latency: that pause before the first token appears. Real token costs: the counter ticking up as your prompt grows. The experience of watching the model confidently answer “What’s the pincode of Hogwarts?” with an invented number, and recognizing immediately that you can’t trust confident-sounding responses about things a model wouldn’t know.

These aren’t things you learn from description. They’re things you learn from observation.

We built TinkerLLM’s curriculum around this directly. Theory doesn’t sit in a separate module you have to clear before the playground unlocks. It lives inside the exercise flow. You read three sentences about tokens, then type “Supercalifragilisticexpialidocious” and watch the token counter respond. You read and experiment in the same scroll. That interleaving is intentional, and we built it that way because we tried the other approach first.
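What the token counter is reacting to can be sketched too. This is a toy greedy longest-match tokenizer with an invented vocabulary; real models learn tens of thousands of subword pieces from data with BPE or similar, so the actual splits will differ. The point it demonstrates is the same one the exercise does: one long word becomes several tokens.

```python
def greedy_tokenize(word, vocab):
    """Greedy longest-match subword split (WordPiece-style toy)."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest piece that matches at position i.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:   # fall back to a single char
                tokens.append(piece)
                i = j
                break
    return tokens

# A tiny invented vocabulary, just to show the mechanism.
vocab = {"super", "cali", "fragil", "istic", "expi", "ali", "docious"}

print(greedy_tokenize("supercalifragilisticexpialidocious", vocab))
# prints ['super', 'cali', 'fragil', 'istic', 'expi', 'ali', 'docious']
```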

Try it yourself: Exercise 4-1 (Brainstorming) runs at temperature 1.0 and asks for names for a pet rock. Run it once. Then run it again without changing anything. You’ll get a different set of names each time. That’s what high temperature actually means, observed directly in 30 seconds. Open app.tinkerllm.com and try it.

The Wrong Turn We Made

In the first version of TinkerLLM, we built theory modules first. You’d see a screen explaining tokenization, read through it, then hit the playground. Standard structure. Logical.

We tested it internally with CS interns and junior engineers at Kalvium Labs. Less than 5% made it through a full theory screen before jumping straight to the playground and just trying things. The reading got skipped. Every single time.

That wasn’t laziness. That was signal. People wanted to send prompts. The theory module felt like waiting for permission to do the interesting thing, and most of them just skipped the formality and started experimenting.

So we rewrote the whole exercise structure. Theory is now interleaved sentence by sentence inside the exercise itself. Three sentences of context, then the prompt to try. That one structural change is probably the most important design decision we made. The full build story is in how we built TinkerLLM.

But the point here is broader than our product. That 5% number is about tutorial-based learning in general. When you give people a theory module and a playground, they go to the playground. Your job as a learner is to get there faster.

How to Actually Learn Any AI Concept

This is the process I’d give anyone who wants to understand something about LLMs:

  1. Find a playground. Google AI Studio, OpenAI’s playground, or TinkerLLM all work. Pick one and open it.
  2. Pick one parameter. Not five. One. Temperature, max tokens, or top-K.
  3. Make a prediction. Before you change anything, write down what you think will happen when you adjust it.
  4. Change it. Run a prompt. See what actually happens.
  5. If your prediction was wrong, dig into why. The gap between what you expected and what you got is where the learning is.
  6. Do it again for the next parameter.

Six steps. No video required.

The whole reason step 3 matters: making a prediction forces generative processing. Your brain commits to a belief and then compares it to reality. Passive watching never forces that commitment. You float through the explanation nodding, and there’s nothing to bounce the reality off of.

Try it yourself: Right now, before anything else, predict: if you set temperature to 0 on any prompt and run it five times, will the outputs be identical or just similar? Then go to app.tinkerllm.com, try it, and check if you were right. Five minutes. No API key needed for the first two lessons.

The 60-Second Test

Try it yourself: Open app.tinkerllm.com. Navigate to Exercise 1-1 (Anthem Completion). It’s free, no API key required. Type “Jana Gana M” into the prompt field and submit it.

Watch what happens.

The model completes India’s national anthem, picking up mid-word, with the right words in the right order. It’s not searching a database. It’s predicting the most likely continuation of that character sequence, based on patterns from its training data. That’s the mental model of LLMs as prediction engines, demonstrated in 60 seconds. Not explained. Demonstrated.
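If you want the prediction-engine idea in miniature, here is a toy character-level model. It trains on a few dozen characters (nothing remotely like a real LLM's corpus) and completes the prompt mid-word the same way: by repeatedly picking the most likely next character given the recent context.

```python
from collections import Counter, defaultdict

def train(text, order=2):
    """Count which character follows each length-`order` context."""
    follows = defaultdict(Counter)
    for i in range(len(text) - order):
        follows[text[i:i + order]][text[i + order]] += 1
    return follows

def complete(prompt, follows, order=2, n=10):
    """Greedily append the most likely next character, n times."""
    out = prompt
    for _ in range(n):
        ctx = out[-order:]
        if ctx not in follows:
            break
        out += follows[ctx].most_common(1)[0][0]
    return out

corpus = "jana gana mana adhinayaka jaya he"   # tiny stand-in for training data
model = train(corpus)
print(complete("jana gana m", model, n=3))     # prints "jana gana mana"
```

No database lookup anywhere in that code. Just counts of what tended to follow what, which is the crude ancestor of what the real model is doing at vastly greater scale.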

Now think about how many YouTube tutorial minutes you’d need to develop the same intuition. Ten? Twenty? And after those 20 minutes, would you have sent the prompt yourself, or just watched someone else do it?

That gap is the whole argument.

Videos Have Their Place

To be fair: videos are fine for some things. A 15-minute overview video is an efficient way to survey a field you’re unfamiliar with. Seeing someone’s full workflow end-to-end is easier to absorb on video than in text. If you’re deciding whether a tool even does what you need, a demo video is faster than setting one up yourself.

The problem is treating survey knowledge as skill acquisition. Watching a tutorial on temperature builds survey knowledge. Setting temperature to 0 and 1.5 on the same prompt builds skill. They feel similar in the moment. They aren’t.

Use videos to decide what to learn next. Use a playground to actually learn it.

FAQ

But I learn better by watching. Isn’t that fine?

If you mean you need to watch someone try something before you try it yourself, that’s completely reasonable. Use the video as a pre-flight briefing, then open a playground within 10 minutes of finishing it. The problem isn’t video as a starting point. The problem is treating watching as the endpoint. If you watched a tutorial on temperature last week and can’t explain what it does right now without looking it up, the passive watching didn’t stick. That’s the honest test.

What playgrounds should I use?

Google AI Studio is free with a Google account and gives you direct access to Gemini models with all the major parameters visible. OpenAI’s playground is good for working with GPT-4o. TinkerLLM is designed for structured learning, with 26 free exercises that teach specific concepts in a deliberate order. For raw experimentation with no structure, AI Studio is the fastest to start. For a guided progression from basics to advanced, TinkerLLM.

Is TinkerLLM actually better than free tutorials?

That depends on what you mean by better. Free tutorials on YouTube cover more surface area. TinkerLLM covers one thing well: LLM fundamentals from tokenization through hallucinations and sycophancy, with exercises you complete using a live model. The 26 free exercises cover more real ground than most 3-hour tutorial series because you’re doing things instead of watching them. If you finish Lesson 2 (Tokens) and still feel like YouTube is teaching you more, that’s useful information. Try both and see which one produces working intuition faster. Browse the full curriculum here.

How long until I actually understand temperature?

Probably 15 minutes with a playground. One exercise where you run the same prompt five times at temperature 0 (identical outputs), then five times at temperature 1.0 (different outputs every time). That experience wires the concept in a way that a 10-minute explanation won’t. The temperature explainer post has the full mechanics if you want the math behind why this happens. But the intuition comes from doing it, not from reading about it.
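The reason temperature 0 gives identical outputs: it collapses sampling to always picking the single most likely token. Here's a toy sketch with made-up probabilities (in practice some providers still show tiny run-to-run drift from batching and floating-point, so "identical" is approximate):

```python
import random

def sample_token(probs, temperature, rng):
    """Temperature 0 degenerates to argmax; otherwise sample from the
    temperature-scaled distribution (toy sketch, invented numbers)."""
    if temperature == 0:
        return max(range(len(probs)), key=lambda i: probs[i])
    weights = [p ** (1 / temperature) for p in probs]
    return rng.choices(range(len(probs)), weights=weights)[0]

probs = [0.5, 0.3, 0.2]           # made-up next-token probabilities
rng = random.Random(42)

greedy = {sample_token(probs, 0, rng) for _ in range(5)}     # always token 0
sampled = [sample_token(probs, 1.0, rng) for _ in range(5)]  # a mix

print(greedy)
print(sampled)
```

Five greedy runs collapse to one token; five sampled runs wander across the distribution. That's the exercise above, reduced to its arithmetic.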

What if I don’t have an API key?

You don’t need one to start. TinkerLLM’s Lessons 1 and 2 (all 26 free exercises) run without an API key. That’s the whole LLM basics and tokenization section. You get a working feel for the model before you ever need to set up an account anywhere. When you’re ready to unlock Lessons 3-18, you get a free Gemini API key from Google AI Studio in about 5 minutes. The free tier is generous enough that most students going through the paid exercises won’t pay anything for the API calls on top of the course fee.

Do I need to code to use a playground?

No. Every playground mentioned here, including TinkerLLM, is a text interface. You type a prompt, change a slider or a number, and read the output. Nothing to install, no programming required. Coding knowledge starts to matter when you’re building applications on top of the API. For understanding how LLMs work, the playground UI is all you need. Exercises 1-1 through about 3-5 require exactly zero coding background.

Won’t the model just agree with everything I try, because of sycophancy?

Sycophancy means the model agrees with your stated positions when you’re asking it to evaluate something. It doesn’t change what the model outputs when you’re just running experiments. When you set temperature to 2.0 and the output becomes unreadable nonsense, that’s real, regardless of how confident you sound. The failure is in the text box, not subject to the model’s agreeableness. Running those failure cases yourself is exactly how you build an honest mental model of what LLMs can and can’t do. We have a whole post on what sycophancy actually looks like in practice, including four prompts that should have triggered skepticism and didn’t. Read it here: We Told AI We Solved P=NP. It Believed Us.


The first two lessons at app.tinkerllm.com are free. Twenty-six exercises. No API key, no credit card. You’ll know within the first three whether learning this way works better than watching someone else do it.

Tags: learn AI, AI course, playground learning, hands-on learning, AI tutorial, prompt engineering, LLM fundamentals
Abraham Jeron The Builder

Engineer at Kalvium Labs. Shares build stories, what went wrong, and what shipped. Writes from the trenches of AI product development.

LinkedIn

Want to try this yourself?

Open the TinkerLLM playground and experiment with real models. 26 exercises free.

Start Tinkering