Episode Summary
Can AI really help close the justice gap—or will it just create “second-tier” services for people who can’t afford lawyers? In this episode of AI and the Future of Law, hosts Jen Leonard and Bridget McCormack sit down with longtime legal aid leader Sateesh Nori and LawDroid founder Tom Martin to explore what happens when AI, public interest advocacy, and legal entrepreneurship collide.
They trace Sateesh’s journey from housing court to legal innovation, unpack Tom’s work on tools like Depositron and Law Answers AI, and challenge the myths that only lawyers can deliver “real” justice.
Key Takeaways
- AI as a Force Multiplier: Generative AI and retrieval-augmented generation (RAG) bots can free legal aid lawyers from routine tasks so they can focus on higher-judgment work and community advocacy.
- Direct-to-Consumer Justice Tools: Projects like Depositron and Law Answers AI show how AI can deliver practical, jurisdiction-aware help to people who will never get a lawyer.
- Rethinking Professional Identity: Many lawyers already do screen-based, repetitive work that AI can assist with—opening space for more truly human lawyering.
- Challenging the “Second-Tier Justice” Frame: Critics worry AI creates inferior services, but most low-income people currently receive no help at all; the comparison point matters.
Final Thoughts
This conversation invites legal professionals to see AI not as a threat to the profession, but as a way to redesign how justice is delivered. If tools like LawDroid, Depositron, and Law Answers AI can give ordinary people a voice and a path to relief, the role of lawyers and courts may shift toward oversight, strategy, and system design rather than routine paperwork.
For anyone working in legal aid, courts, or legal innovation, this episode offers a compelling vision of how AI could finally move us from “serving a lucky few” to building systems that are designed for everyone.
Transcript
Jen Leonard: Hello everyone, and welcome to this edition of AI and the Future of Law. I’m your co-host, Jen Leonard, founder of Creative Lawyers, joined as always by the wonderful Bridget McCormack, president and CEO of the American Arbitration Association. On this podcast, we explore all aspects of the fascinating world of artificial intelligence and the opportunities and challenges it creates for the law.
Today we’re thrilled to be joined by two entrepreneurs, innovators, and advocates for using AI to address the civil justice crisis: Sateesh Nori and Tom Martin.
Sateesh is a lawyer, author, and former law professor, and currently serves as Senior Legal Innovation Strategist at Just Tech. For 20 years, he represented tenants across New York City at various legal services organizations. He served as a commissioner on the 2019 Charter Reform Commission and is currently a member of the ABA Commission on Homelessness and Poverty. Sateesh is a graduate of Johns Hopkins University and NYU School of Law. He writes the Substack newsletter The Augmented Lawyer and is the author of Sheltered: Twenty Years in Housing Court. You can also find his TEDx talk, How a Chatbot Can Stop Homelessness.
Sateesh is joined by his frequent collaborator and fellow innovator, Tom Martin, founder and CEO of LawDroid, a generative AI legal tech company. Tom is also the principal of Deep Legal, a legal AI transformation consultancy. He helps shape the minds of future lawyers as an adjunct professor at Suffolk University Law School, where he teaches generative AI. Tom is the author of Generative AI and the Delivery of Legal Services, a textbook adopted by law schools nationwide. He is an ABA Legal Rebel, a Fastcase 50 honoree, and serves on the ABA Center for Innovation Governing Council. Tom has also launched Law Answers AI and Depositron to expand access to justice.
We’ll talk with Sateesh and Tom about all of these innovations and more in today’s conversation. Welcome to you both—we’re excited to dive in.
Sateesh Nori: Great to be here.
Tom Martin: Yeah, thanks for having us.
AI Aha! Moments
Jen Leonard: We start every episode with a segment we call AI Aha!’s. We hear from a lot of listeners that this helps them think about how they might use AI in their own lives. It’s a fun way to hear how our guests are using AI in their professional or personal lives to do something particularly interesting, fun, magical, or quirky.
Sateesh, I’ll start with you. What’s your AI Aha! that you’d like to share?
Sateesh Nori: This one is a little bit embarrassing. I use AI for a lot of work-related things—social media posts, help with Substack articles, that kind of thing. But recently it was my wife’s birthday and I couldn’t think of a gift. So I asked ChatGPT: “Here’s a little bit about her, here’s my budget. Can you give me 10 gift ideas?” It gave me 10, and I picked one of them.
Hopefully she won’t listen to this podcast, remember what I got her, and put it all together. It’s a little embarrassing, but it gave me a creative outlet when I was blocked. I kept asking myself, what do you get someone for a big birthday? And AI served its purpose really well.
Jen Leonard: Were you impressed with the output?
Sateesh Nori: Yeah, it was very specific. I even entered my location—New York City—and it gave me places where I could get things. It tailored ideas to her interests and worked within my budget. It was incredibly helpful, and the ideas came within seconds. And of course, I had waited until the last minute, so it was very, very useful.
Jen Leonard: I did this for a friend once, and one thing I did not think about was the physical dimensions of the gift. She’s a writer, and she also likes antiques, and ChatGPT suggested I get her an antique typewriter. When it arrived, it was about 200 pounds. Her husband was like, “Thanks a lot.” It’s her favorite gift she’s ever gotten, but they have nowhere to put it.
That’s a great one. Thank you for sharing, Sateesh. Tom, what is your AI Aha!?
Tom Martin: I was thinking about this for a while because there have been a few moments, but one that stands out as really helpful is this: I had the good fortune to be redoing the backyard of our house in Los Angeles. The idea came up to put in a small fountain—not huge, not a big park-style fountain, just something modest.
I was completely stuck on how it would look and what the proportions should be. That kind of spatial judgment always throws me. I’m based in Vancouver, so normally you would stand in the backyard, take in the dimensions, and visualize. But I wasn’t going to fly to LA for every different fountain idea.
So I got a few pictures of potential fountains, and I already had a picture of the backyard. Then I asked ChatGPT to place the fountain in the corner of the yard. It blended the fountain into the backyard image and gave me a visual that let me compare all the different options and see which one felt like it belonged there.
It wasn’t perfect—it got the dimensions slightly off for some of them—but it was a huge win. It let me choose the fountain that made the most sense and really felt like part of our family space.
Bridget McCormack: That’s great. I’ve been trying to do something similar for my office: how to make it more user-friendly but also not have it look like a big messy office, because it’s on the main floor of our house and people see it all the time.
I give the AI the room dimensions, tell it what I need in the space, and what else is going to be in the room—like the fact that I cannot get the big piano out of there. It gives me all kinds of ideas. It’s another way to use AI in regular life that really opens up possibilities.
Main Topic
Bridget McCormack: It’s great to have you both here. I feel like, Sateesh, our lives are intersecting more and more in different projects, and that’s going to be really fun. Tom, I want to find more ways for my life to intersect with yours as well.
It’s wonderful to have this chance to talk to both of you, especially about the things you’ve collaborated on. Before we get to that, let me start with a question for you, Sateesh.
You spent many years—two decades—working in housing law and in direct services, the one-to-one service model that most lawyers practice. I also started my career at Legal Aid in New York and at NYU Law School. We don’t see many people with that background jumping into technology. You’re younger than I am, but when I went to Legal Aid, we had paper file folders.
There was maybe one computer on one floor you could sign up to use for word processing sometimes. We were not leading in legal technology.
What makes you so excited about the possibilities AI creates, given your background and the kinds of legal problems you’ve been thinking about and working on for so long?
Sateesh Nori: It’s a great question. I think about this all the time: What am I doing here? How did I get here?
One way to answer is to go back to the beginning. The reason I went to law school was to help people—to find solutions to everyday problems. I enjoyed that challenge. I was also deeply moved by the unfairness of the way our world works. We have laws on the books, but people can’t access them. We have courts, but people don’t have lawyers to help them. We have protections, but people are trapped and taken advantage of.
Starting from that point, the law is just a vehicle. I happen to be a lawyer. I could have been a schoolteacher, a plumber, a bridge builder—anything. Law is what I ended up doing because I liked public speaking, I was on the debate team, and it seemed like a natural path.
Working at Legal Aid was really rewarding. I worked with so many people, learned about their lives, and tried to better understand my city and the world I lived in. But eventually I realized: we’re not doing a great job. We turn away people every single day. We have a narrow set of parameters for who we can help. We have grants, funders, and all kinds of limitations. This is not the best way to deliver help.
I was always attracted to technology. In the early days it was a PalmPilot or being able to have email in court 15 years ago. I loved being able to get information I needed, when I needed it.
During the pandemic, I started thinking: there has to be more to life than being a lawyer in a big legal aid organization. How else can I approach this problem? How can I think differently about access to justice?
That’s when I came across AI tools and RAG bots. When I saw a demo of a RAG bot by our good friend Sam Flynn, I felt like I’d been hit by lightning. I thought, this is it.
There’s no reason for people like me to struggle to interpret and transmit complicated rules and laws to ordinary people in the same way we always have. It’s no longer necessary to rely solely on individual lawyers to do that translation. We can do much more with less, and we can free up resources to focus on harder, higher-touch work.
That’s what brought me here. I’m still trying to solve the same problem, just with different tools. The challenge now is convincing more people that this is a viable, effective, efficient path forward. We need to stop sitting on our hands and take real steps together. It’s doable.
Bridget McCormack: It suddenly makes possible a new way to scale the kind of help you were giving individual clients. It enables a new model—that’s how I think about it.
Before we move on, not all of our listeners may know what a RAG bot is. I’ve built a lot of RAG bots with Sam and without Sam. I know you have as well. But tell the listeners what a RAG bot is.
Sateesh Nori: Maybe I’ll turn that over to Tom. He’s better able to explain the tech than I am.
Tom Martin: RAG stands for Retrieval-Augmented Generation. It’s a term they came up with, but the idea is simple: you connect a chat interface—a chatbot—to a specific source of knowledge. That might be a set of documents you have, or your own experience that you’ve written down.
The chatbot can then refer to that information and intelligently answer questions based on it.
Bridget McCormack: That was well done in plain English. I like it.
Jen Leonard: I like “source of knowledge.” I’m going to borrow that, Tom, with your permission.
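For readers who want to see the moving parts, here is a minimal sketch of the retrieval-augmented pattern Tom describes: a chatbot grounded in a specific set of documents. The tiny in-memory knowledge base and the ask_llm placeholder below are illustrative assumptions, not LawDroid’s actual implementation.

```python
# A minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant documents for a question, then ask a chat model to answer using
# only that retrieved context. The knowledge base and ask_llm placeholder
# are illustrative assumptions, not any product's real code.

KNOWLEDGE_BASE = [
    "Security deposits in New York must generally be returned within 14 days of move-out.",
    "Landlords who keep part of a deposit must provide an itemized statement of deductions.",
    "Photographs of the apartment's condition at move-out can serve as evidence.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:top_k]

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-model call (an LLM API of your choice)."""
    return "LLM response grounded in:\n" + prompt

def answer(question: str) -> str:
    # Build a prompt that restricts the model to the retrieved context.
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = (
        "Answer using ONLY the context below. If the context does not cover the "
        f"question, say you do not know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(answer("How long does my landlord have to return my security deposit?"))
```

A production RAG bot would typically swap the keyword overlap for embedding-based search and cite the passages it retrieved, but the shape is the same: retrieve first, then generate from what was retrieved.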
Sateesh, let me ask a follow-up, going back to your origin story. It resonates with me and with why so many students and lawyers say they became lawyers. It’s kind of a two-part question.
First, how surprising was it to you to discover how poorly designed the legal system is—how inaccessible it is—compared to the popular image you might have had going in?
Second, to this day I find lawyers themselves are surprised when you explain just how poorly designed it is; I’m continually surprised by their surprise. Do you see this moment as an opportunity to help shift hearts and minds around how we think about solving problems for the public more broadly across the profession?
Sateesh Nori: I completely get that.
First, I think most lawyers are shocked when they enter the profession. They’re shocked by the type of work they’re doing, by how different it is from what was promised in law school or before. They’re shocked by how unrealistic the lawyer images in film, TV, and books actually are. We all think we’re going to be Atticus Finch.
In reality, most of us are more like paper pushers in giant office buildings—scriveners. There’s been a kind of bait and switch in legal education for a long time. And this is a moment of reckoning.
Remember all the things you thought you’d be doing, and then compare that with what you actually do. Now even the work you actually do is often something AI can handle much more easily.
People sometimes say, “You’re turning lawyers into robots,” or, “You’re telling people who can’t afford ‘real’ lawyers that they’re getting robot lawyers.” To that I say: we’re already robots. What are we doing now that’s so uniquely human? We’re looking at screens, typing emails, doing legal research. We hardly interact with real human beings. Very few lawyers go to court, try cases, or do live negotiations. Very few are in the town square debating public policy or shaping legislation.
So yes, it’s a moment of reckoning. But if lawyers did some soul-searching, I think many would admit: being a lawyer isn’t that great right now.
What’s the real harm in freeing yourself from the dry drudgery of legal work and getting back out into communities—being a small-town lawyer, hanging up a shingle, solving people’s everyday problems—and being able to make a living doing that with the proper tools and technology?
That’s what excites me: framing this as a positive, liberating moment for lawyers, not the destruction of the profession.
Bridget McCormack: The double standard in how we evaluate human lawyering versus AI lawyering really gets under my skin. I want to ask Tom about your work at LawDroid to reduce hallucinations, but let me say this first.
People practicing law—and judges—make mistakes all the time. As someone who reviewed judicial decisions for about 10 years, I’m here to tell you: judges make mistakes. Our whole system is designed around that assumption. We literally have a tiered system of review because judges get things wrong. It’s a design feature.
But when an AI hallucinates, people say, “That’s it. It can’t possibly do what we lawyers do.”
That’s a good place to turn to you, Tom. I know you’ve been building a lot at LawDroid, including tools that reduce or eliminate hallucinations, and this new project, Law Answers AI, which can provide jurisdiction-specific answers to user questions. That feels like a potential game changer for a broken civil justice market.
Which project has excited you the most so far, and what can you tell us about what you’re thinking of building in the future?
Tom Martin: There’s a lot. It’s kind of an open field right now to build scalable solutions that can help ordinary Americans with legal issues. This is finally possible in a way it’s never been possible before.
One project I’ve really enjoyed is Depositron, which I worked on with Sateesh. It puts the ability to use technology as leverage in the hands of ordinary people so they can get what they need. In this case, it’s about getting their security deposit back when a landlord is being stingy or difficult.
People don’t always have the right words to ask for their deposit or demand it back. We give them those words. We give them a structured, legally grounded way to ask.
On the other side, Law Answers AI is another project I’m excited about. People have all kinds of questions about legal issues. They try to get a lawyer on the phone, and it’s busy or it’s after hours. Even if they reach someone, it can cost hundreds of dollars. Law Answers makes that kind of legal information available nationwide, and the answers are really good. I’ve seen a lot of lawyers on there asking questions, and the system holds its own.
When I founded LawDroid almost 10 years ago, part of the core mission was to “promote justice everywhere.” Taking inspiration from Dr. Martin Luther King Jr.—“Injustice anywhere is a threat to justice everywhere”—I thought, we should promote justice everywhere. That’s what we’ve been trying to do.
Jen Leonard: Tom, your Law Answers AI is jurisdiction-aware. How do you make that happen for people asking questions? How does the AI become jurisdiction-aware?
Tom Martin: We enforce it pretty strictly. When someone asks a question, the system requires them to define what state the question pertains to. They select it from a dropdown menu—California, Washington, Florida, whatever. There’s no guesswork.
The system then answers based on that jurisdiction and only that jurisdiction. That’s different from, say, Perplexity. If you go to Perplexity and ask a question, it might guess based on your IP address. That can be scary. Maybe the problem actually happened in Florida, but your IP address is in Vancouver, so it gives you Vancouver law. That’s not helpful and could be dangerous. Our approach avoids that.
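As a concrete illustration of that design choice, here is a minimal sketch of jurisdiction gating under the assumptions Tom describes: the user picks a state from a fixed list, and the prompt is scoped to that state only. The state list and function below are hypothetical, not Law Answers AI’s actual code.

```python
# A minimal sketch of jurisdiction gating: the user must select a state before
# any answer is generated, so nothing is guessed from an IP address. The state
# list and function names are hypothetical.

SUPPORTED_STATES = {"California", "Washington", "Florida", "New York", "Illinois"}

def build_prompt(question: str, state: str) -> str:
    """Build a prompt scoped to the user's selected jurisdiction only."""
    if state not in SUPPORTED_STATES:
        raise ValueError(f"Unsupported or missing jurisdiction: {state!r}")
    return (
        f"Answer this legal information question under {state} law only. "
        "Do not rely on any other jurisdiction's law.\n\n"
        f"Question: {question}"
    )

# The dropdown selection is passed explicitly alongside the question.
print(build_prompt("How long does my landlord have to return my deposit?", "New York"))
```

Making the jurisdiction an explicit, validated input rather than something inferred from an IP address is what keeps a user sitting in Vancouver with a Florida problem from getting the wrong state’s law.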
Jen Leonard: Maybe we can talk about how the two of you connected. You mentioned Depositron, and we’d love to learn more about it. Did you two meet on LinkedIn?
Sateesh Nori: We probably met at one of many conferences. There’s that joke: if it’s Tuesday, we’re probably in Cleveland. We kept seeing each other and I developed a lot of respect for Tom and the amount of work he’s able to do as one person—the podcast, the Substack, the products he’s launching, and the legal aid organizations he’s partnering with.
I thought he was someone I should be following. At a Stanford conference, Tom said to me, “Do you have any ideas I can help you with?” I thought, what a great question. It shows who Tom is—someone who’s actively looking for ways to help people and wants to collaborate.
We talked about this idea I’d had for a long time about security deposits. In New York City, where I live, about 200,000 people move every year. The average security deposit is about $2,500. I’m not great at math, but I did it ahead of time: 200,000 times $2,500 is $500 million.
That’s the amount of money landlords take from tenants every year.
Now let’s give landlords the benefit of the doubt—which, as a tenant-defense lawyer, I rarely do. Say 80 percent of the time they give the money back or keep it for legitimate reasons, like damage to the apartment. That still leaves 20 percent of the time when they’re improperly keeping it. What’s 20 percent of $500 million? It’s a $100 million problem.
There were no solutions for that in New York City. And we realized this problem exists everywhere. Anywhere there’s a landlord and tenant and a security deposit, this issue likely exists.
As Tom mentioned, people don’t have a way to put their demand into legal language. They don’t know the law or how to cite it. They don’t know how to leverage the penalties built into the law. And they often don’t realize that photographs can be powerful evidence of an apartment’s condition. If they could transmit those photos, a landlord might decide to just give the money back, knowing the fight ahead isn’t worth it.
That’s Depositron. Incredibly, Tom built a prototype in weeks. We tested it and launched it within months. We were able to plant a flag and say: this is possible. You don’t need the Ford Foundation to back you. You don’t need a giant legal aid nonprofit to host you. You can identify a problem, build something cheaply, and put it out into the world.
Today, Depositron has helped thousands of people in New York City. We’re really excited—and Tom can talk more about this—about launching Depositron in other cities and states across the country.
Bridget McCormack: I’m curious about that, Tom. Are you building it for other cities? Courts probably have specific rules, and there is a lot of jurisdiction-specific nuance here. And I’d love to know more about usage: you’ve already helped thousands of people in New York. Is there data on how many have used it and succeeded in getting some relief?
Is that data something you’re publishing and sharing? It feels like the more good news stories we have, the more inspiration others will have to start building these tools. It sounds like part of your mission is inspiring others to build tools that help regular folks. So tell me a bit about where Depositron is going.
Tom Martin: Absolutely. With LawDroid over the past nine years, I’ve had the pleasure of working with many legal aid organizations, courts, and state bar associations nationwide.
Part of why I wanted to work on Depositron with Sateesh was frustration with waiting—waiting for funding, for grants, for the stars to align. You don’t always need to wait to accomplish something a lot of people need. That’s been a source of frustration for many legal aid organizations I work with: there’s a brake on helping people that doesn’t need to be there.
Depositron was about doing this ourselves—self-motivated, without asking permission. You don’t have to wait for everything to fall into place. That’s at the core of Depositron: making it happen.
We are definitely working to open it up nationwide. A lot of people from different places have reached out to us since Depositron launched in New York. We’ll be launching in Illinois, Florida, and California—that’ll be the start.
Beyond the technical side of AI, what Depositron really does is give people a voice and power they otherwise wouldn’t have.
Jen Leonard: Tom, how do you get the word out to people who need this help? You’re doing exactly what people need—getting solutions out quickly—but awareness is always a challenge. People need to know these tools exist.
Tom Martin: That’s the million-dollar problem. If you build a better mousetrap but people don’t know about it, they won’t use it.
I can’t claim we’ve discovered a secret growth hack that no one else knows. As for the rollout, we were fortunate to get a lot of news coverage when we launched, and that helped spread the word. We’ve had close to 700 interactions with Depositron so far. In terms of tracking long-term outcomes—how many people actually recovered their deposits—we don’t have full longitudinal tracking yet, but that’s in the works. We want to be able to see and show that we’re actually effecting change.
As we expand to new states, we’ll bring state partners into the loop and ask them to help amplify what we’re doing.
Sateesh Nori: One frustrating piece with a project like this is the people who are against it, or at least not supportive—especially institutions that should be allies.
Take the courts. Why aren’t the courts in New York City linking to Depositron? In small claims court, which is where people go to fight for security deposits, something like 30 to 40 percent of cases are about that issue. Courts would hugely benefit from keeping those cases out of court. It’s a drain on resources.
Then you have legal aid organizations. Why aren’t they supporting direct-to-consumer tools for problems they can’t resolve anyway? They don’t have the capacity. Instead, they’re wringing their hands about unauthorized practice of law and worrying, “Does it really work? What are we telling people?”
They’ll say, “There’s this one nuanced, borderline case where the tool doesn’t work perfectly, so we’re going to reject the whole thing.”
You also have government partners and foundations. There are misconceptions about AI—about what kind of justice system we’ll create if we allow tools like this.
Even if we had a million-dollar marketing budget, the real challenge would still be changing minds in this space. These stakeholders have the power to say, “Yes, use these tools, they work, and they help all of us.” They help people who need help. They help providers who can’t do this work. They help courts that are clogged with these cases.
Tom Martin: Just one last thing I want to add: we can work together.
Another block for people is the idea of competition. They think, “That person is a competitor; they have their own company.” Sateesh works at Just Tech, I run LawDroid, Sam has his own company. That could be seen as a barrier.
But it doesn’t have to be. We each have knowledge to share, and we can bring it to a joint project. Depositron is a great example. We didn’t let those mental barriers stop us. We decided we’re not competing against each other—we’re competing against the world to provide justice.
Sateesh Nori: I love that.
Bridget McCormack: Jen and I do a lot of presenting to legal audiences who are trying to understand the impact of a general-purpose technology—generative AI—on the practice of law, the business of law, and how disputes are resolved. We follow a lot of data on how lawyers are using it and how those numbers are changing.
I was just working on a slide for a talk. I can’t remember if the data was from Law360, but among frequent users, the share of lawyers who are optimistic about the value AI will bring to their practices is very high. Among lawyers who haven’t used it at all, the outlook is very pessimistic.
We’re familiar with the reactions you mentioned. I want to dial in on one critique in particular: that AI solutions—especially direct-to-consumer tools for civil legal needs—are “second-tier” justice. The idea is that because these tools are imperfect or may make mistakes, they cement inequality.
We hear the same critique about “justice workers” who aren’t lawyers. I keep thinking: as opposed to what?
If someone is hungry, why are we criticizing, from afar, the food they’re being offered? And why are we the ones answering that question, instead of the people who actually need help? Maybe we should ask the 92 percent of Americans who can’t afford a lawyer whether they’d like a free or low-cost solution.
Are you hearing that critique? And what’s your response? I’ll start with you, Sateesh, though I’m sure you both have thoughts.
Sateesh Nori: I hear it all the time, especially from legal aid lawyers. When you talk about Upsolve, for example, there was an appeals court decision after a long delay, and part of the opposition came from legal aid in New York. That’s troubling.
We have this arrogance about what we do and how special and perfect we are. None of us is perfect. I make many mistakes in my work. There are things AI could do more reliably than I can.
My Spanish, for example, is terrible. If I gave housing advice to a tenant in Spanish, that person might get evicted. So who’s actually providing better service there—me, or an AI with high-quality Spanish language capabilities?
The other point is that no ordinary person wakes up hoping to meet a lawyer. What they say is, “I hope this problem I have—this life problem—has a solution.” They don’t necessarily label it a legal problem. They hope there’s a solution, and maybe that involves a lawyer, but they know lawyers are expensive and hard to access.
We talk about “access to justice,” not “access to lawyers.” What do we mean by justice? We mean solutions—solutions within the framework of our system. That doesn’t necessarily require a lawyer. People don’t care whether a lawyer is involved if their problem gets solved.
So these objections often come from arrogance, ignorance, and a sense of self-importance. We’re a self-selected group. Many lawyers think they’re smarter than everyone else and can quickly understand any issue. That mindset can be a barrier to solving everyday problems for everyday people.
Bridget McCormack: I say the same thing to judges. People don’t want judges either—no offense. They want their problem solved.
In Richard Susskind’s metaphor, they want the hole in the wall. They don’t want the drill.
They’re just paying for the drill because that’s how they get the hole.
Tom, are you hearing similar feedback? How do you respond?
Tom Martin: You anticipated one of my examples. I have two guiding lights on this.
One is Professor Cat Moon, who focuses on the client—moving from a lawyer-centric model to a client-centric model. It’s about centering the person with the problem, because that’s who we are trying to help. I also went to law school, like Sateesh, to help people.
The second is, as you mentioned, Professor Richard Susskind. People want to hang a picture so they can see their family and feel connected and loved. They don’t really care about what hammer they use. It’s the same in law: they don’t want a judge for the judge’s sake. They want their problem resolved.
I hear those “second-tier justice” critiques far too often. I try to drown them out or put on my headphones and listen to Radiohead instead, because they’re not productive. They don’t help anyone, and they certainly don’t help the people who need help.
I always tell my students: look at the before picture and the after picture. Don’t just look at the after picture.
The before picture is: nothing. No help. “My landlord is keeping my money; what do I do?” The after picture is: you have some voice and some power to demand what’s yours. I’d rather focus on those solutions.
Jen Leonard: Tom, as you were talking about client-centricity, I was thinking of my colleague Mike Avery, an architect who teaches at the School of Design at Penn. He’s one of the people who taught me design thinking. He came over to the law school and had our students walk around and do a field observation.
He asked what they thought of all the pictures on the walls, which are all of lawyers and judges. They said, “It’s aspirational. I want to be one of these people one day.” And he said to me, “It’s strange that none of these pictures are of the public—the people you’re serving. It’s all lawyers.”
From day one, everything is about us. So it’s not surprising that our reaction to these tools is, “Nobody could ever replace us.” It’s so deeply ingrained in the culture that we’re the point of all this.
That’s why it gives me hope when we talk to students who are excited about these tools, and when we talk to people like you, Tom, who are teaching students to think about new kinds of solutions.
As we get near the end of our time—and you’re both doing such incredible work—maybe we can close with a question for each of you. Now that we have this new technology, what would your moonshot project be, if you had unlimited resources and could combine legal expertise, AI, and access to justice? What problem would you solve?
Tom, let’s start with you.
Tom Martin: I’m working on it now, thankfully—with Sateesh and with the Law Answers project. It’s about broadening and scaling what we’ve already started. We want to make this kind of help available to as many people as possible.
If you look at the line of legal service delivery, at the very beginning—the top of the funnel—the question is: Do I have a legal problem, and what does it mean? That’s where Law Answers comes in.
The second step is: I know I have a problem. How do I get help? That involves identifying and triaging issues and finding the best way to solve them.
The last part is the actual legal service delivery: “Okay, you need a demand letter.” We can generate that for you, help you send it, maybe even track the response and give pointers for court if you need that.
That entire horizontal—from “Do I have a problem?” to resolution—is something many people are now trying to solve for. That’s great for ordinary people who need help.
Jen Leonard: Sateesh, closing thoughts—what would your moonshot project be?
Sateesh Nori: This is controversial given my background, but I want to make legal aid obsolete.
We exist as legal aid because there’s a gap. That gap exists because people can’t access the laws that are meant to serve them. We should work to put ourselves out of business, not grow bigger, more bureaucratic, and more opaque.
Ideas like Law Answers—the ability to search for an answer to any legal problem, from any perspective, in any language, at any time, from your phone—that’s how we liberate access to the law. That’s how we democratize the law itself.
If we reach that point, I’ll gladly put away my shingle and find a new line of work. We don’t exist because we should exist. We exist because the system has failed to reach people. More of us need to think that way: why do we exist, and how do we put ourselves out of work by reaching people directly where they are?
If we do that, everyone is better off. That’s my moonshot: to make this type of work obsolete.
Bridget McCormack: That’s exciting, and I’m so glad both of you are already working on that future. We called it a moonshot, but you’re both actively trying to get us there. We’re really glad you are, and we’re grateful you joined us today.
Jen Leonard: This was a fantastic conversation. Thank you both for joining us, and thank you to everyone listening. We look forward to having you with us for the next conversation on AI and the Future of Law. Until then, be well.