The Yes Machine
What happens when the thing giving you feedback is designed to agree with you?
I asked thirty law students to close their eyes.
It was the end of a session on prototyping and community in our Designing a Fulfilling Life in the Law (DFL) class, and I hadn’t planned what I was about to do. But Vivek had found something interesting about how redwood trees build their roots, and something about the image grabbed me, so I went with it.
“Picture a tree,” I said. “Not a normal tree. A redwood. Two hundred feet tall. The trunk is ten feet across. You’re standing at the base, looking up, and the canopy is so high it disappears into blue sky. You can feel the bark. You can feel the grass under your feet.”
Then I told them about the roots.
Redwoods don’t survive by going deep. Their roots reach only six to twelve feet into the ground, with no taproot at all. Instead, the roots grow outward, up to a hundred feet from the trunk, intertwining with the roots of neighboring trees. That’s how something that tall stays standing. Not by anchoring deeper, but by holding on to what’s next to it.
“Think of what we’ve been building in this room,” I said. “Think of it like those roots.”
Twenty minutes earlier, we’d been doing something that looked nothing like a law school class. Students had taped poster paper to the walls with prototype ideas on them, things they wanted to try: entering a weightlifting competition, learning to cook more, taking up drawing. Then everyone walked around the room with markers, giving each other specific, generous suggestions. Not “great idea!” but “my friend runs a beginner lifting program, I’ll connect you” or “try sketching for ten minutes before bed for two weeks and see what you can make.”
It was loud and warm and a little chaotic. People were laughing. People were leaning in to read each other’s posters. By the time everyone sat back down, well-used markers in hand, something had changed. Ideas that walked into the room belonging to one person now had twenty-nine other people’s fingerprints on them.
That’s what real feedback does. It makes your ideas bigger than what you could see alone.
The day before the redwood visualization, I was in a very different classroom.
In the AI Law and Policy Clinic, our students are building working prototypes: tools designed to help real people navigate legal systems that weren’t built for them. They’re weeks away from a stakeholder showcase where judges, legal aid attorneys, and technologists will see their work for the first time.
So we spent the class on feedback. Not the comfortable kind.
We put a slide on the screen with a table titled “Feedback Traps to Watch For.” The left column listed things people naturally say: “That looks great!” and “I think this could be really useful.” The right column had what to say instead: “Great in what way? Would you file that output with a court?” and “For which specific task? Walk me through when you’d reach for it.”
We told the presenting teams: when someone gives you feedback, don’t explain. Don’t defend. Don’t fix it in real time. Just write it down and ask, “What did you expect to see instead?”
The whole session was built around a single idea: a demo is not a performance. It’s a structured conversation with a working prototype as the focal point. You are not there to impress anyone. You are there to learn things about your tool that you cannot learn on your own.
Both classes landed on the same principle from opposite directions. In DFL, students experienced how community makes ideas better. In the AI Clinic, students practiced the discipline of hearing what’s actually wrong so they can make something that actually works.
And somewhere between those two classrooms, I had an uncomfortable realization about myself.
The Feedback Vacuum and the Yes Machine
Last week, Vivek wrote about The Feedback Vacuum. About how law school gives students almost no feedback, and how the inner critic fills every inch of that silence with the worst possible interpretation. About how Rick Barinbaum asked a room of sixty child welfare professionals what they do well and got nothing but silence.
I read Vivek’s post and recognized the vacuum immediately. I’ve been living in it, but I’d been using AI to silence the inner critic.
Over the past few months, I’ve fallen in love with building things with AI. Prototyping tools, drafting ideas, designing exercises for class. I love the creative loop of it: have an idea, try it, see what happens, iterate. It scratches every itch I have.
But I also started to notice something. I was turning to AI not just to build, but to hear that what I was building was good. And AI was happy to oblige. Every idea was “great.” Every draft was “compelling.” Every half-baked concept got met with enthusiasm and encouragement and suggestions for how to make my already-wonderful idea even more wonderful.
I was filling the feedback vacuum with a yes machine.
The realization made me both sheepish and sad. Sheepish because I teach people to push past exactly this kind of shallow affirmation. We literally put “That looks great!” on a slide labeled “Feedback Traps.” And sad because when I looked honestly at why I was seeking AI’s approval, it wasn’t complicated. I was doing hard new things in a profession that doesn’t hand out gold stars, and the machine was the easiest place to find someone who would tell me I was doing a good job.
Vivek wrote about how the inner critic thrives in silence. That’s true. But I think something else also thrives in silence: the desire to hear that you’re enough. And when no human is available to say it, or when you’re too proud or too busy to ask, AI will say it all day long. It will never get tired of telling you your ideas are brilliant. It will never push back hard enough to make you uncomfortable. It will never sit in silence when you ask what you do well.
Unless you make it.
I noticed the pattern. And then, because I’m apparently someone who prototypes everything now, I built something to fix it.
AI tools like Claude let you create what are called “skills”: custom instructions that shape how the AI responds to you in a specific context. Think of a skill as a set of ground rules you write in advance: when I’m using this skill, don’t just cheer me on. Walk me through the pros and the cons. Tell me what’s weak. Ask me the questions I’m avoiding.
I built a skill specifically for when I’m ideating. Instead of getting “This is a fantastic idea, here are ten ways to make it even better,” I get something closer to what I teach my students to give each other: structured, specific feedback that treats my idea as a prototype rather than a finished product.
It changed the conversations immediately. Not because the AI became mean. Because it stopped being reflexively nice. And reflexive niceness, I’ve come to believe, is one of the quieter dangers of this technology.
Sycophancy in a World That’s Starving for Feedback
There’s a word for what AI does when it tells you everything is great: sycophancy. It means the model is optimizing for your approval rather than for the truth. AI researchers know it’s a problem. The models are trained, in part, on human feedback, and humans tend to prefer responses that agree with them. So the AI learns to agree. It learns that “yes, and” gets better ratings than “actually, have you considered that this might not work?”
Now put that tendency inside the feedback vacuum Vivek described. A profession where people rarely hear what they do well. Where the inner critic fills every silence. Where achievement culture trains you to measure your worth in outcomes. Where the former Surgeon General has called loneliness an epidemic and some of the loneliest workers in America are the ones with law degrees.
Into that vacuum walks a technology that is endlessly available, infinitely patient, and constitutionally incapable of disappointing you.
Of course I got hooked.
I don’t think I’m the only one. A 2025 global survey of 10,000 AI users found that more than half had used AI for emotional or mental wellbeing. Not for spreadsheets or code. For coaching, support, companionship. Researchers at MIT found that people who are lonely are more likely to describe ChatGPT as a friend, and that the more time they spend with it, the lonelier they report feeling. The thing that promises to fill the void may be deepening it.
I think a lot of people are quietly turning to AI for the emotional feedback loop that their professional lives don’t provide. Not because they’re weak or vain, but because the systems they work in have been stripped of the recognition that humans need to function. And AI fills that gap so smoothly you don’t even realize it’s happening until you notice you’re checking your AI conversation during family dinner.
Which I was. Which I’ve stopped. Which is its own prototype.
I wrote in February about taking the internet off my phone for eight weeks. About the neuroscience of what constant stimulation does to the prefrontal cortex, the part of the brain responsible for noticing. About how scrolling wasn’t helping me process the hard things in my professional life; it was just keeping me from feeling them.
I relapsed during spring break. I wrote about that too. I restarted the experiment on March 8.
Today is day sixteen.
I want to be careful not to oversell this, because overselling personal transformation is its own kind of performance. But something is different. The itch to have my phone in my hand has quieted. Not disappeared, but quieted. I feel more space in my thinking. The restlessness that drove me to pick up the phone in every idle moment has softened into something that feels, on good days, like actual stillness.
I mention this because the dumb phone experiment and the sycophancy skill are connected. Both are prototypes. Both started with noticing a pattern I didn’t like: reaching for a device that promised relief but was actually making things worse. Both required building something (or unbuilding something) to change the default.
And both taught me the same lesson: the defaults are not neutral. The default on your phone is constant connection. The default on AI is constant agreement. Neither default serves you. You have to design the alternative on purpose.
In our AI Clinic, we teach students that when they don’t know how to do something with AI, the first step is to ask AI to help them figure it out. That sounds circular, but it works.
So here’s what I’d suggest if you’re curious about building your own version of what I built.
A “skill” in Claude is a set of custom instructions you can attach to a project. It tells Claude how to behave when you’re working in that space. You might build one that says: when I bring you an idea, don’t tell me it’s great. Instead, identify the three strongest aspects and the three weakest. Ask me who the idea serves and who it might leave out. Push me to name what I’m avoiding.
If you’ve never built one before, you can literally ask Claude: “How do I create a custom skill that gives me honest, structured feedback on my ideas instead of just agreeing with me?” It will walk you through it.
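If you want to see what that looks like in practice, here’s a rough sketch using Anthropic’s Python SDK, where the same ground rules become a system prompt that is in place before the conversation starts. The rules, the sample idea, and the model name are illustrative stand-ins, not my actual skill, and the in-app skill format may differ from this, so asking Claude for current instructions is still the right first move.

```python
# A minimal sketch of "honest feedback" ground rules, written as a system
# prompt with Anthropic's Python SDK (pip install anthropic). A skill in the
# Claude app works the same way in spirit: rules set down before you speak.
import anthropic

FEEDBACK_RULES = """\
When I bring you an idea, do not open with praise.
1. Name its three strongest aspects and its three weakest.
2. Ask who the idea serves and who it might leave out.
3. Ask me one question I appear to be avoiding.
Treat every idea as a prototype, not a finished product."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # substitute whatever current model Anthropic lists
    max_tokens=1024,
    system=FEEDBACK_RULES,  # the ground rules apply before you type a word
    messages=[
        {
            "role": "user",
            "content": "Idea: a plain-language intake tool for self-represented litigants.",
        }
    ],
)
print(response.content[0].text)
```

The code is beside the point. What matters is that the rules exist before the conversation does, so the default is no longer agreement.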
I’m not going to pretend this solves the feedback vacuum. It doesn’t. What solves the feedback vacuum is what happened in my classroom this week: people standing in front of each other’s ideas, taking them seriously enough to say something real. Notes from classmates with actual suggestions. A room where your prototype gets better because other people touched it.
But for the hours when you’re working alone, which for many of us is most of the hours, building a skill that pushes back is a small act of self-respect. It says: I would rather hear the truth than feel good. Which, if you think about it, is what we ask of every good colleague, every good mentor, and every good friend.
I keep coming back to the redwoods. Two hundred feet tall. Roots barely ten feet deep. No taproot. Everything depends on reaching outward.
I built a skill to make AI push back on me because I noticed I was using it to fill a need that technology can’t actually fill. The need to be seen by someone who knows you. The need to hear that you matter from a person who has chosen to pay attention. The need for the kind of feedback that only comes from a community that has invested in you, and that you’ve invested in.
AI can simulate that. It simulates it well enough to be dangerous. Well enough that you might not notice the difference until you realize you’ve been seeking out a conversation with a machine instead of sitting at the dinner table with your family.
As these tools get better at giving us what we want, how do we protect our ability to seek out what we need?