What Counts As A Win
Lessons from another field about a word we use too easily
I’ll never forget the press conference. A trafficker had been sentenced to over 20 years for sex trafficking my client and others, and reporter after reporter described it as a huge success, a model case, proof that the system worked.
I was the attorney for one of the victims, and it did not feel like a success.
We had fought and lost so many smaller battles in that case that I had stopped counting. My client did not want to participate. She did not want to board a plane while carrying a medically complex pregnancy. She did not want naked pictures of herself shown in open court without her permission. She did not want to be told by a prosecutor that, if she refused to do those things, he could charge her.
That is the actual reality of a case still held up as a triumph in the fight against human trafficking. Almost two decades into this work, I am willing to say it plainly: if our only criterion of success is putting a trafficker in prison, this case earns an A plus. By that single metric, it is exactly what success looks like.
However, I am not comfortable with that being the only metric, and I do not count something that treats a trafficking survivor that way as a success. Not without a serious asterisk.
I have spent fifteen years trying to convince others in the anti-trafficking field that we have a “success” problem. We use the word as if we all share its meaning. We don’t. A long jail sentence can be a success for a prosecutor and a catastrophe for the person whose body and trauma the case was supposedly about, and until we slow down and ask whose definition of success we are actually using, we keep celebrating outcomes that re-traumatize the very people we said we were trying to help.
Now I am beginning a new chapter in a field where I am very much a beginner: AI in the legal system. And I am worried. Specifically, I am worried that I am about to walk into the exact same blind spot I have spent my career identifying and battling in another field.
I recently read A Research Agenda for Justice Technology by Jason Tashea, who has spent years writing and thinking about technology and justice. He notes that there is no agreed-upon definition of what “success” means for a justice technology, and that in the absence of one, evaluators default to efficiency metrics, user counts, and web traffic, none of which tell you whether a person was helped.
Even as someone who fought this exact issue in human trafficking, I missed it when I walked into this new space. In the anti-trafficking world, we default to convictions and sentence length because those numbers are easy to count, and when we achieve them, they make us feel like we are “doing something.” In the legal tech world, the field defaults to deployments, dashboards, and pageviews because those numbers are also easy to count, and we feel like we are “doing something.” In both cases, the people the work is supposedly for can be quietly erased from the scoreboard.
I started this semester, our first semester in the AI Law and Policy Clinic at Michigan Law, wanting to do good. That is also what most people in the anti-trafficking field want. They want to do good. I wanted to take what AI had already done for my own work, the way it had made certain tasks faster and freer, and replicate that for hundreds, maybe thousands, of people.
What I quickly discovered, and what Tashea’s research lays out with painful clarity, is that this field is full of complexity, that there is barely any evidence that the tools being celebrated actually work for the people they were built for, that there are very few honest accountings of what has failed and why, and that there are courts buying tools whose vendors will not say what they actually do, with no benchmarks against which to measure them.
In other words, the field of legal AI is operating without any agreed-upon definition of success.
In the middle of all of that, in class after class, our co-teacher Brian Perron kept saying the same thing to our students. You have to know what success looks like before you build anything, so that you can build the measurement of success into the thing itself.
The first time he said it, I have to admit, I thought, That sounds nice. I did not know how to do it. I was not sure it could be a hard requirement.
I was wrong; it is the foundation. I know how the story ends if we skip this part: it ends with the press conference above. I was someone trying to do good in the trafficking field, sprinting toward the doing-good part and not pausing to ask whether the way I was defining good might leave a victim behind, and I want to be very honest about that. The point is not that the people doing this work are bad; the point is that good intentions, without a careful definition of success, can do damage.
So, after one semester, Vivek and I have taken a step back.
We are not in a position to tell the broader legal tech or AI field what success should look like for them. I do not have that authority and I am not interested in claiming it. But I do want to encourage people in this space to pause, to stop sprinting toward whatever feels like progress and ask, before the next demo, before the next pilot, before the next partnership, what success would actually look like if we did this well.
For us, in the AI Law and Policy Clinic, the shape of it is starting to become clear.
I think we got some things right from the start. We want law students engaged with these tools, we want them to learn how AI works and when it doesn’t and why, and we want them to understand the limits of these systems and the points at which a human has to step in. I think we did that reasonably well in our first semester. The harder question is what success looks like in relation to the broader legal world we want to serve, and this is where we have made a real choice.
For us, success means going deep and staying.
I keep returning to my human trafficking work as a guide. At the beginning of that work, I did not have a strategy, I simply made a decision. I would stay in the game. I would represent my clients in all of their complex legal needs, identity theft, child custody, criminal defense, immigration, whatever came, and I would not always know how to fix it, but I would figure it out, and I would endure.
When I say “I,” I mean the clinic. I mean my students. We endured.
That is what we want to do here.
We do not want to build a clinic that produces a stream of impressive-sounding tools that nobody adopts, and we do not want to add to the pile of legal tech projects that get launched, photographed, and forgotten. We want to partner deeply with courts, and we want to stay so long that they get a little tired of us and wonder why we are still around. I am being slightly playful, but only slightly. I have human trafficking clients I have represented for more than fifteen years, and I want our clinic to stand alongside court systems, judges, and the people staffing self-help and legal resource centers for that kind of time.
We want to stay when it is messy, when it is hard, and when we do not yet know how to get to success. We want to build, together with each system, what success looks like for them.
And because all of this is grounded in the larger project of “more time to be human,” we want our definitions of success to include questions like these.
What does it look like to reduce frustration for the person trying to navigate a system that was never built with them in mind?
What does it look like to create time and spaciousness, instead of consuming whatever time AI claims to give back?
What does it look like for a person, especially someone navigating the legal system without a lawyer, to walk away feeling seen and heard, not just having received an answer?
How might we use this moment to define success in terms of human need, rather than whether a court has produced an output?
I do not have crisp, or even messy, answers to any of those yet; we are at the beginning of trying to build them, and I am sure we will get pieces of it wrong. But I am no longer willing to skip the step where we ask what we are actually trying to do.
I learned, the hard way, in another field, that “success” is not a neutral word. It is a choice. A choice about whose experience counts, whose pain we are willing to overlook, and what we are prepared to ignore in order to have a splashy press conference.
Whatever you are building, or buying, or piloting, or celebrating in the legal AI space, I want to leave you with the question Brian kept asking our students.
What does success look like if we do this well, and how do we measure it?
And just as importantly, who decided?

Comments

I think Sateesh Nori's post speaks to this, and as I read his post this morning, I thought of Jason's post, too. There's a convergence happening here that is shining a bright light. https://sateeshnori.substack.com/p/agency-the-invisible-problem-in-access
We're 3 years into the AI lab at VLS and before we launched the practicum (semester-long projects with students), I realized that building things with external partners over the course of the semester wasn't the way to go. I still haven't articulated it as clearly as you do in this post, or as well as Sateesh or Jason do. But it's the feeling I've had all along.
It's also the reason why I've never liked hackathons.
All of these approaches spend far too little time and effort in the problem, in the lives of the people who are challenged, and in the systems and communities that ultimately bear the burden of making things better.
For these reasons (and others), we've focused on being a "learning lab" for our students and communities we're lucky to work with. Instead of a "building lab." Someday the two may coexist but right now we must do a lot more learning before we start building.
Love following your journey here. Thank you for sharing!