Most advice about working with AI boils down to some version of “be specific about what you want.” Write a better prompt. Describe the output in detail. Give clear instructions. That’s fine — but I’ve found there’s a much more powerful move that most people skip entirely.

Show it what good looks like.

The problem with telling

Here’s the thing about describing what you want: you don’t always know. Or more precisely — you know it when you see it, but you can’t fully articulate it. This is especially true for anything stylistic or nuanced.

I ran into this head-on when I started building a new podcast with my friend Cyrus. We wanted the AI-generated scripts to sound like us — not generic podcast-host-voice, but the way Cyrus and I actually talk to each other. The way we interrupt, riff, push back, land jokes.

So I tried telling it. I wrote descriptions of how each of us speaks. I explained our dynamic — who tends to set up points, who tends to land them, how we use humor. I spent a lot of time on this. And the scripts came back… fine. Competent. But they didn’t sound like us.

Then I had a different idea. Instead of describing our voices, I just gave it transcripts of our actual conversations. Real ones, unedited. And the difference was immediate. The AI picked up on patterns I hadn’t even thought to mention — little verbal tics, the way we build on each other’s points, the rhythm of how we go back and forth. Things I couldn’t have described because I wasn’t consciously aware of them.

That’s the core insight: you can’t tell an AI about things you don’t know you know. But you can show it examples that contain those things, and let it figure them out.
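To make that concrete, here's a minimal sketch of what that kind of prompt can look like in Python. It's illustrative, not my actual setup: the file paths and the `build_script_prompt` helper are hypothetical, and the pattern works with any chat model API.

```python
# "Show, don't tell" prompting: paste raw transcripts into the prompt as
# examples instead of describing the hosts' voices. The paths and helper
# name here are hypothetical placeholders.
from pathlib import Path

def build_script_prompt(topic: str, transcript_paths: list[str]) -> str:
    # Real, unedited transcripts carry the verbal tics and rhythm that a
    # written description of the hosts would miss.
    examples = "\n\n---\n\n".join(Path(p).read_text() for p in transcript_paths)
    return (
        "Here are unedited transcripts of real conversations between the hosts. "
        "Study how they interrupt, riff, push back, and land jokes:\n\n"
        f"{examples}\n\n"
        f"Now write a podcast script about {topic} in exactly that voice, "
        "matching the patterns you observed rather than a generic podcast-host tone."
    )

prompt = build_script_prompt(
    "this week's AI news",
    ["transcripts/episode_12.txt", "transcripts/episode_13.txt"],
)
# Send `prompt` to whatever model you use; the examples do the heavy lifting.
```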

Letting AI learn what you didn’t think to teach

I saw this play out even more clearly with my other podcast, What the AI?!, where I co-host with Annie. We have a pretty dialed-in workflow — AI helps generate the script, we record, and then I feed the transcript back in so the system can learn from it.

One lesson it picked up was particularly sharp. We'd recorded an episode where we ran long (too many stories, not enough time) and ended up skipping the last story entirely. The AI noticed this, and on its own it added a check to its script-writing process: make sure the final story in the rundown is skippable. Keep the most important stories earlier in the show so that if we have to cut, we're not losing something critical.

I never would have thought to write that as an instruction. It’s the kind of operational wisdom that only emerges from watching real work happen — from seeing where the plan met reality and broke down. But because I showed the AI the gap between the script and what we actually recorded, it found the lesson itself.
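Here's a rough sketch of what that feedback step can look like, assuming you keep the planned script and the recorded transcript as plain text files. The paths and the `extract_lessons` helper are my own illustrative names, not the actual system:

```python
# Compare the plan against what actually happened, and ask the model to
# write its own process rules. Paths and helper name are hypothetical.
from pathlib import Path

def extract_lessons(script: str, transcript: str) -> str:
    return (
        "Below are the script we planned and the transcript of what we "
        "actually recorded.\n\n"
        f"PLANNED SCRIPT:\n{script}\n\n"
        f"ACTUAL RECORDING:\n{transcript}\n\n"
        "Compare them. Where did the plan break down? State each lesson as "
        "a concrete rule to add to the script-writing checklist."
    )

prompt = extract_lessons(
    Path("episodes/ep42_script.txt").read_text(),
    Path("episodes/ep42_transcript.txt").read_text(),
)
# Append the model's rules to your standing instructions so the next
# episode inherits the lesson (e.g. "keep the final story skippable").
```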

Why this works

There’s a useful analogy here to how people learn. If you’re training a new hire, you can hand them a style guide and a list of dos and don’ts. That helps. But they’ll learn far more from sitting in on a few meetings, reading a few real examples of great work, and seeing how the team actually operates.

AI is similar. Instructions set a baseline, but examples create understanding. And the richest examples are messy, real-world ones — not polished samples you curated to illustrate a point, but the actual artifacts of your work. Transcripts, drafts, email threads, before-and-after edits. The stuff that captures all the things you know implicitly but would never think to write down.

The practical takeaway

Next time you’re struggling to get AI to produce something that feels right, resist the urge to write a longer, more detailed prompt. Instead, ask yourself: do I have examples of what good looks like?

Feed it past work you’re proud of. Show it the real conversations, not your description of them. Give it the before and after so it can see what changed. Let it find the patterns — including the ones you didn’t know were there.
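If you want to try the before-and-after version, a sketch like this works, again with hypothetical file paths standing in for your own drafts and published pieces:

```python
# Before-and-after prompting: pair raw drafts with the versions that
# actually shipped so the model can infer what your edits have in common.
# The file paths are hypothetical placeholders.
from pathlib import Path

pairs = [
    ("drafts/intro_v1.txt", "published/intro.txt"),
    ("drafts/outro_v1.txt", "published/outro.txt"),
]

sections = [
    f"BEFORE:\n{Path(draft).read_text()}\n\nAFTER:\n{Path(final).read_text()}"
    for draft, final in pairs
]

prompt = (
    "Here are drafts alongside the versions we actually published:\n\n"
    + "\n\n===\n\n".join(sections)
    + "\n\nDescribe the patterns in how the drafts were edited, then apply "
      "the same kind of edits to the new draft I give you next."
)
# Send `prompt` to your model of choice; the pairing is what matters.
```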

You’ll be surprised how much it picks up that you never thought to mention.