AI in LA Courts: David Slayton on Access to Justice

How do you modernize a court that spans 36 courthouses and 1.3 million filings a year? David Slayton, CEO of the Los Angeles County Superior Court, joins Jen Leonard and Bridget Mary McCormack to share a practical playbook for building AI-ready courts. He explains why L.A. shifted its mission from “efficient” to “effective,” how Court Help uses curated sources and feedback loops to support litigants, and what it means to pilot, fail fast, and scale responsibly.

We also explore the risks of moving too slowly (capacity crises and backlogs) and too quickly (erosion of public trust), along with future models like predictive analytics and optional “express lanes” for resolution. Leaders in law, policy, and operations will leave with concrete ideas for deploying AI where it truly helps—while keeping people at the center.

Key Takeaways

  • Effective > Efficient: Redesign measures of success for justice, not speed.
  • Guardrails First: Limit AI to vetted sources and collect user feedback.
  • Triage with AI: Automate the routine so people can focus on the complex.
  • Trust is the Moat: Move fast enough to serve, slow enough to explain.
  • Pilot, Learn, Scale: Start small, measure, and expand what works.

Final Thoughts

AI doesn’t replace judgment—it buys time and clarity for it. Courts that pair disciplined guardrails with human touch can widen access to justice without sacrificing legitimacy. If you work in law or public service, this episode offers a blueprint to act now—responsibly—and meet rising demand with tools that serve people first.

Transcript

Introduction

Jen Leonard: Hi everyone, and welcome back to the AI and the Future of Law podcast. I'm your co-host, Jen Leonard, founder of Creative Lawyers, joined as always by the fantastic Bridget McCormack, president and CEO of the American Arbitration Association. 

On this podcast, we explore all the fascinating and fast-moving dynamics related to artificial intelligence and what it means for our profession, including our court system. And we are delighted today to be joined by a leader in one of the country's largest and busiest court systems. So I will turn it over to Bridget to introduce today's guest. Bridget.

Bridget McCormack: Thanks, Jen. Great to see you. I am really excited about today's conversation, because my friend and one of my favorite people in the whole world is our guest. David Slayton is the CEO of the Superior Court of L.A. County, the largest court system in the country. I met David when he was the state court administrator for the state of Texas, and through the pandemic he and I co-led a number of technology and innovation efforts. David has 26 years of experience in court administration at every level and is viewed by everybody who's ever worked for a court as the leading thought leader and innovator in court administration, court operations, and access to justice. He's also incredibly smart and fun to talk to, and an absolutely lovely person. David, welcome, and thanks so much for joining us today.

David Slayton: Thanks, Bridget. Thanks, Jen. I'm glad my mom found your email so she could send you those comments. So that's nice.

Bridget McCormack: We start our podcast every week with what we call our AI Aha! moments—ways in which we or our guests are using generative AI, in our work lives or our out-of-work lives. Just fun things we're doing with it, or things we're finding that are fun to share. We like our guests to share theirs, so I'll start there. David, what are you doing with this technology lately in your life?

AI Aha! Moments

David Slayton: Yeah, as I think about this, two things come to mind. The first is, I recently took a trip to Scotland. It was a two-week trip and my family was going with me, and I bought the travel guide and thought, why am I doing all this work? So I literally asked ChatGPT to help me plan my vacation. I did some of the planning myself, then uploaded the document and asked, "Is this reasonable to do?"—because I generally over-plan, and my family's like, "Seriously?" And ChatGPT gave me good tips.

And then the other thing I'll say, professionally, is we launched a new website at the court—LACourt.gov—on July 1st, and we have a generative AI tool right on the home screen to help people. It's called “Court Help,” and I'm now using that to find anything on our website, because rather than clicking around and trying to find it, it's just so much faster to say, “Hey, take me to the strategic plan,” and it takes me right to it. So those are two things that I'm using it for right now.

Bridget McCormack: That's great. I hadn't heard that you all launched that! I've been following the different courts that are figuring out ways to use it, especially to help people who have to interact with courts—especially people who interact with courts without lawyers. (Although, as a lawyer, I often find court websites to be really, really hard to navigate.) So that's a pretty cool offering. I'm sure people appreciate it.

So let me start there, though. Tell everybody a little bit more about the L.A. court system. I remember when you took the job and told me just how complex it was, and I don't know if everybody understands that. Can you describe a little bit about the volume of cases the court sees, the people you serve, how many of those are navigating your courtrooms without lawyers, the number of judges—paint the picture of what the L.A. County court system is, so we understand what you're up to day to day.

David Slayton: Yeah. So, as you point out, it's the largest court system in the country. You know, we have 582 judges and commissioners (judicial officers, as we call them) at the court, spread across over 4,700 square miles. They sit in 36 courthouses across the county. And we have about 5,000 employees who work for the court, and a $1.1 billion budget. (And I did say “billion” with a B.) We obviously serve a lot of people—there are about 1.3 million cases filed per year in the court across our litigation types. We handle everything from small claims and traffic tickets to the most complex civil litigation and even death penalty cases, and everything in between.

So it's a consolidated trial court—we handle everything. And we have a tremendous number of self-represented litigants, people representing themselves (or at least attempting to). In our family context, depending on the year, somewhere between 70% and 80% of the people navigating the family system either start or end the case representing themselves. (Usually that percentage grows over the life of a case.) In the civil context there are more lawyers involved, but to give you an example, we recently did some work with Stanford on our eviction cases (what we call unlawful detainer cases), and only 14% of defendants in our eviction cases last year were represented by a lawyer—compared to, I think, 94% of landlords. And that gap continues to grow; we looked back over several years and we see it. So obviously lots of people try to do this by themselves, which is really relevant to the topic we're going to talk about today.

Trust and Transparency in AI-Enabled Courts 

Jen Leonard: So, David, let's dive right into the topic. The complexity of what you've just described is staggering in its scope, and the number of people trying to navigate the system on their own is appalling—which makes it ripe for new solutions. Courts are often slow-moving and find it difficult to adapt to new technologies, but your plan in L.A. for 2025 to 2028 explicitly highlights AI (and generative AI) as important to the court's strategy. Could you tell us about the vision of that plan, and why it's so important to you, the court's leadership, and those you work with to harness the power of AI right now?

David Slayton: Yeah. One thing has stuck with me for, I guess, 11 years now. (Chief Justice McCormack has heard me say this before—I keep thinking I'm going to get in trouble and get a call from Chief Justice John Roberts when I say it publicly, but I'll keep saying it until he gives me a call.) In 2014, in the year-end address the Chief Justice always gives, Chief Justice Roberts said, "The courts will often choose to be late to the harvest of ingenuity." He was talking about the slow pace at which the Supreme Court had adopted electronic filing. And sometimes it's true—we are oftentimes late to the harvest of ingenuity. But the word "choose" in there sort of offends me. I think it's a challenge to all of us to really ask: why would we choose to be late to the harvest of ingenuity?

So, when we thought about the vision for the L.A. Superior Court, we went through about a six- to eight-month process with our judicial officers—very engaged, with 21 judges from across our court, different disciplines, different lengths of time on the court. We started with the basics: What's our vision? What's our mission? We changed our mission statement—I won't recite the whole thing, but everyone can look it up. One of the things we talked about was that our old mission statement had the word "efficient"—that we would be efficient. There was a very intentional shift: that word does not exist in our mission statement anymore. It's now that we will be effective. And that's something that's really important to me and to our court: let's not just do things fast; let's do them well.

When we looked at our vision statement—I'll read you our whole vision statement: “Our court will be accessible to all, trusted by all, and just for all.” It's a pretty strong statement that we want to be accessible to all. And I just gave you earlier some of the real barriers to access: it's a big county; traffic is an issue; technology is an issue for some people; language barriers—we can go on and on with all the barriers. And so trying to think of ways to make it truly accessible for all, and then, you know, trusted by all (we’re going to talk today, I think, a little bit about how we make sure people trust us—that's another big one: if they can't access it, they can't trust us either). And then the last piece: just for all—we want to be delivering justice as best we can, and thinking about new and innovative ways to do that and meet the needs of our litigants is important.

So, we really went through that and thought about: what does that look like? Well, we want to be people-focused. We want to make sure that the first thing we think about when we're doing things at the court—new technology, new innovation, new processes—is that we start from the focus of the people (primarily the court users, but also our internal court staff and judicial officers). But really the first focus is the folks outside. We want to be transformative; we want to innovate. We want to use data to inform our decisions. And ultimately, all of that led us to this idea of really focusing on enhancing accessibility and transforming the user experience—thinking innovatively and looking forward to how we might be able to do that.

As you look back over the couple of centuries (almost 250 years now) that we've been doing this in this country, I think many times people have said, “Thomas Jefferson could walk in and practice law today.” I don't know that that's something we should be proud of—especially now that most people don't have a Thomas Jefferson standing by their side. Really thinking about the best way to deliver justice and meet the needs of people is the key reason why, in our strategic plan, we're focused on using any sort of innovative technology (AI obviously being one of those) to try to help people as best we can.

I gave the example of Court Help. You can go on Court Help and all the content is based on what's on our website or on the Judicial Council of California's website—but of course, that information is hard to find. So you can go on Court Help right now and say, “How do I file a response or an answer to an eviction case?” and it gives you step-by-step instructions and links to the forms. I mean, that's what we should be doing, and AI has given us the opportunity to do that.

Bridget McCormack: Everything you said resonates with me as somebody who led a court system for a while. I have a slide of a surgical suite from 1890 and a modern surgical suite, and then a courtroom (the Iron County Courthouse in Michigan) from 1890 and today. And, you know, it looks exactly the same. A lawyer dropped from 1890 really could navigate today's courtrooms—and I agree with you, I'm not sure that's something we should be proud of. It seems to me, since we've gone through four industrial revolutions, we might think about how we need to change how we do what we do.

But you've been incredibly successful at that at lots of different levels—and also in helping others be successful at it in some of your roles. In addition to the cultural, legal, and regulatory barriers to transformation, there's just a big change-management hill to climb in providing court services to a largely unrepresented user base. When lawyers and judges have done things one way for a long time, that can be a really hard part of the equation. What have you learned about change management in a profession that values how we've always done things? (We're literally governed, legally, by decisions we made before—that's what precedent is.) It's a very hard change-management atmosphere. How have you succeeded? I mean, I don't see many other examples. What's your secret change-management playbook?

David Slayton: Yeah. I think, as you point out, the always-looking-back focus on precedent is a challenge for us. And people don't like change. No one wakes up in the morning and says, "I want to change something today." (Well, maybe a few of us do.) So I think you have to start with making the case for change—the why. Change for change's sake is not going to be successful. Diagnosing problems is important—being able to describe them clearly. A lot of times we say, "Let's make this change," and no one understands why we're doing it.

So, to me, number one is really trying to lay that out clearly—where, in essence, no one can really say no, because you've made the case and, unless they just want to say no for the sake of it, they're going to say, “Oh yeah, this makes sense.” That's the first thing.

The second thing is—and this is a real problem for us that we have to keep working on—we do not allow for failure. Obviously, when we're deciding cases we don't want to be making mistakes (there is an appellate process for that, if we do), but I think that sometimes spills over into our administrative functions: “Let's not make any mistakes.” And we know what happens with that: no one will ever do anything, because they're terrified.

So making sure people in our organizations understand that we can try things. Obviously we want to fail fast, and we don't want to fail big. We want to manage failure, but we are going to fail. I've failed many times in my career with things, but the goal is to fail fast. Then the biggest question is: How do we get up? What do we do next? How do we learn from that failure and move on? I think that's a real key to success. Some of the simple things: pilot it. Give it a shot on a smaller scale and see if it works—see what doesn't work—and figure out how to make it happen. I think all of those things are really key.

I wrote an article with Chief Justice Nathan Hecht a few years ago about change and how to deal with it. Sometimes it feels like you're running the gauntlet, and one of the things we wrote about is that sometimes you just have to do it. You have to make a decision and move. You have to say, "We're going to make this change, because it's the right thing to do," and, with the best information you have, make that decision. We talk in that article about how sometimes you don't have all the information—actually, most of the time you don't have all the information. So you just have to decide. I think sometimes the fear of doing anything stops us from doing anything. So really, it's about having the courage to step up and say, "This is an important change; we're going to try it. We might not succeed, but staying with what we've been doing is even less likely to succeed."

So I think those are some things that I really try to ground myself in and share with the people around me to try to move things forward.

Bridget McCormack: I feel like you're peering into my soul! I just gave a keynote about change management, and I said, “Action leads to information.” My view is you sometimes have to just do things to learn. And you might learn that that thing doesn't make any sense to do anymore, but action leads to information. On my team, I often say, “Let's run the experiment.” Somebody has an idea and somebody else says, “It'll never work,” and my answer is always, “Let's run the experiment and find out.” How do you convince others of that?

I now work with a lot of people who are not lawyers, and that's refreshing, I have to say. How do you convince others—like the 500-plus judicial officers and the lawyers who work with you—to go along?

David Slayton: You’ve got to explain the why. People have to grasp the why; they have to understand why. If we try to do it before that, it's not going to work. And then—I mean, this won’t help people who are just starting out doing change—but you build a track record, right? You've been successful maybe in little things, maybe in big things, and you gain trust. So I think, again, that requires action—to your point earlier. 

Let’s say you're new at this (or a new court administrator, or new in this function). You have to do something to build trust. Then when you're successful—or you fail, but you're able to pivot and make it successful—then you get some trust. And then you gain a little bit more, and a little more. And then ultimately you say, “We want to do this giant thing, and here's why,” and people say, “Sign me up, let's go!” It's interesting to watch. I've done, in my career, some crazy things that I look back on and think, I'm so excited to have been a part of those things. And I think that helps me when I say, “We're going to do this thing,” and people come along.

Now, if I had been new in my career, would I have been able to convince people? I don't know. But I think, again, it goes back to that: if you're new at this, you have to start. Otherwise, you're never going to gain that credibility that's so important whenever you're talking about change.

How L.A. Courts Use AI to Serve Self-Represented Litigants

Jen Leonard: David, talking as you are about the high percentage of self-represented litigants trying to navigate the system and the potential AI has to start putting power into people's hands to self-navigate (or navigate with tools that the court creates for them), there has been a lot of conversation about both the power that these tools create and the risks involved in democratizing access to the tools. We all know on this Zoom that lawyers have always had control of designing the systems, even though the outcome is that most people navigate the systems without lawyers at their side (as you said). So how are you thinking through creating frameworks for harnessing the power of AI while minimizing the risks to people who are using AI to self-navigate the systems?

David Slayton: Well, let's start with one thing that I keep saying (and I know Bridget and I had lots of conversations about this about five years ago): our system as it exists today is not without risk, and it causes many harms. When we were going through the pandemic, I remember Bridget and me having conversations with people who would ask, "What if this happens... because we're doing something remote?" And we'd respond, "Hang on—that happens today in the physical world. We're not creating a utopia here." We have to understand that there are risks and harms that come with the system we've had. A system without AI—a system without new technology—has failures and risks as well, and they are significant. So I just want to set that aside for a second.

That doesn't mean we shouldn't be doing everything we can to reduce those risks and harms. And we know that AI can introduce risks if we're not careful. So I'll give you an example of something we're doing. I mentioned "Court Help" on our website. We did a couple of things. One is we trained that tool to use only content from our website or from the Judicial Council of California—obviously, we don't want it going out searching the whole web and providing answers we don't control. And we're fine-tuning the tool to tell it how creative to be. We're deliberately thinking about the ways we use AI. I don't want people to be scared of AI—I think courts need to be embracing it more—but at the same time, you have to be careful. You have to think about the way you're using these tools, what content you allow them to rely on, and those kinds of things.

One of the criticisms of these automated tools is the bias they can bring in. So you have to think about what content and information the tool is using to answer litigants. And at the end of the day—go on Court Help and look for yourself—we put a disclaimer on there: "This might not be right." But guess what? The clerk in the self-help center probably doesn't get it right 100% of the time either. We just don't give those disclaimers.

Bottom line: we've got to keep thinking of ways to improve and keep getting feedback. Again (I keep bringing this up), on Court Help we have a thumbs-up/thumbs-down. If it gives you a bad answer—and hopefully people will give us a thumbs-down—we take that as a feedback loop and make improvements to the system.

That's the path forward on this. I don't think we should go into it without eyes wide open about the risks. I just hope we remember that there are risks to doing nothing as well—the ones that have existed for decades, if not centuries.
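To make the guardrail pattern David describes concrete, here is a minimal, hypothetical sketch—retrieval restricted to vetted sources, low model "creativity," a standing disclaimer, and a thumbs-up/thumbs-down feedback loop. The search index, LLM client, and feedback store are illustrative assumptions, not Court Help's actual implementation.

```python
# Hypothetical sketch of a guardrailed court-help assistant; the
# index, llm, and store objects are stand-in interfaces, not the
# L.A. court's actual stack.

ALLOWED_SOURCES = ("lacourt.gov", "courts.ca.gov")  # vetted content only

def answer_question(question, index, llm):
    # 1. Retrieve only from the curated corpus (court and Judicial
    #    Council pages); never the open web.
    passages = [p for p in index.search(question, top_k=5)
                if p.source_domain in ALLOWED_SOURCES]
    if not passages:
        return ("I can only answer from official court resources. "
                "Please contact the self-help center.")

    # 2. Keep "creativity" low so the model sticks to the sources.
    context = "\n\n".join(p.text for p in passages)
    prompt = ("Answer using ONLY the court content below. If the answer "
              f"is not there, say so.\n\n{context}\n\nQuestion: {question}")
    answer = llm.complete(prompt, temperature=0.1)

    # 3. Always attach source links and the disclaimer.
    links = ", ".join(p.url for p in passages)
    return (f"{answer}\n\nSources: {links}\n"
            "(This tool might not be right; verify with court staff.)")

def record_feedback(question, answer, thumbs_up, store):
    # Thumbs-down answers feed the improvement loop described above.
    store.append({"question": question, "answer": answer, "helpful": thumbs_up})
```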

Jen Leonard: That resonates, David. I've had conversations with friends and colleagues who work as public interest lawyers and in legal aid organizations, and sometimes the question is posed as a binary choice: when the stakes are highest—somebody is at trial or in a hearing in front of a judge—either they're represented by a legal aid attorney or they're represented by a robot. And why would you want a world in which that's the outcome? I would not want that world either! But to me, there are literally millions of gradations between that situation and what you're describing, which is making websites and information more usable and more accessible.

So I'm curious how you think through selecting low-risk uses—identifying those and rolling out opportunities to test and get feedback from self-represented litigants—versus the higher-stakes situations where we'd want to focus lawyer time and attention, and use the promise of AI to move lawyers in that direction. Is that how you think about AI?

David Slayton: Yeah, that's a great description of how I see it. I see AI as a tool, not a replacement—it’s a tool. Think about it from the perspective of a self-represented litigant, and maybe even from a triage perspective. Maybe level one is an AI tool that helps provide them information to move them along the path. And maybe that's enough for X percent (maybe it's not even a large percent). Then there are those who are going to need additional help. We want to use the AI to point them to, “Okay, now you need to go to the self-help center, or you need to call the self-help center, because this tool is not going to help you beyond this point.” The problem is, of course, we don't have unlimited funds and unlimited staff. So, we want to be using our tools—just like with legal aid, we want to be using the resources we have—to help the most people. I think of AI as a tool that's sort of a force multiplier. It gives us more opportunity to help more people. But at the end of the day, the self-help staff, the lawyers, are going to be there to really try to guide people in a way that's more helpful.

You know, it's really interesting. When I started at L.A. Court, we started a project with Stanford Law School. My original question for them was, “Why are all these people standing outside the self-help center every morning? They can do everything that they're doing here online, and coming to downtown L.A. to one of our self-help centers is no easy task—parking, traffic, all the things that go with it.” So my original question/hypothesis for the Stanford team was, “Can't we get these people to appear remotely using the technology we have for them?” They did focus groups and all this research, and they came back to me a couple months later and said, “You know why they're coming in person? Because this is the most important thing happening in their lives, and they're terrified they'll get it wrong. They just want someone to say, ‘Yeah, that's the right form. Yes, that's what you put there in that blank.’”

So sometimes that human touch is so important, especially in situations where people have so much at risk. We can't forget that. Technology is a tool that can help move people along the path and give them information that at least reduces their anxiety and frustration. But we've got to have the people there to assist when those needs rise above what we can provide through technology.

Bridget McCormack: So interesting. In a way, it's only by starting to put the technology into people's hands (or into your users' lives) that you find the better use cases—including use cases that might surprise you, where people want to park and drive and talk to a person. You only know that by taking steps in a particular direction.

I think a very clear picture of your change-management leadership is emerging (and this is why you're as successful as you are). You flagged a Harvard Business Review article about the CEO of Moody's diving into generative AI—a somewhat surprising institution to do that. I think I now know why you flagged it, but I'd rather hear it from you: what resonated with you about that story, and what can we learn from it?

David Slayton: Yeah. Obviously, Moody's is a century-old institution in the financial industry—very risk-averse and very concerned about its reputation. Super important. Sounds kind of familiar—sounds like the types of institutions we work in. As I was reading this article, I found it fascinating because one of the challenges I think we face when we're talking about court staff and judicial officers is fear of the unknown, fear of AI, and not understanding how this might be a tool that would help us in our daily work but also help those we serve.

One of the things the Moody's CEO basically said was, "We don't have a choice but to embrace this, because if we don't, we're going to be left behind. We won't be able to serve our clients, and the bottom line is we will not exist." I think, again, that's a challenge for us. This isn't like e-filing, where some courts still don't have electronic filing because they chose not to adopt it. I don't think we're in that situation with AI—we don't really have a choice. It's coming to us, and our staff and our judges are using it whether we know it or not. So the question is: How do we embrace this in a way that's effective?

One of the things I found fascinating in that article is that the Moody's CEO did all that research and said, "We need to make sure everyone in our organization understands AI—has a basic level of understanding of how it can be used. Then we want to give them tools to be able to innovate with AI. We want innovation to be driven from the ground up." I just found that really fascinating. If I sit in my office as a CEO and say, "You all need to use AI to do X, Y, and Z," the resistance is going to be tremendous, because people don't understand it—they're scared of it; they don't see how it might help them. What Moody's did (and what we're looking to do) is flip that upside down: give everybody the grounding they need to understand it, give them actual tools they can use safely and effectively in their everyday functions, and then let them basically demand from us, "Give me AI to solve this problem, and that problem, and this problem." That changes the way this looks. I work in an organization where most of our employees are represented by labor unions; we don't want labor fighting against us, saying, "Oh, this is going to eliminate jobs," or whatever. That's not the message. The message is: this is a tool that can really help employees with the work they're already doing, and we want them demanding it from us (versus us pushing it down on them and them resisting it). I found that Moody's HBR article really instructive—something I could take and apply in my own organization, and we're on the cusp of doing that.

Bridget McCormack: I couldn't agree more. There are so many threads in what you said. You know, the market will show no loyalty to our traditional ways of working—it won't. It's not going to care. And a general-purpose technology (Jen and I do a lot of presentations on this) is just going to be throughout our lives. It's not like some tech upgrade you can decide not to put in place, right? You can say, "We, the court system, are not going to put our data in the cloud—we've decided that's too dangerous. We're not going to do that. So you're going to have to keep filing the old way, and we're going to keep everything on-prem." But AI is going to be throughout our lives. That's the thing with general-purpose technologies: you don't fully understand all the ways they're going to impact you. But by giving all of your teams access and encouragement, they'll be the ones to figure out where it can have an impact on the workflows that are the most tedious.

And also the workflows that can easily be automated—people in particular jobs will probably know those better than you will, right? You don't do that job in that court, on that assistant clerk's docket. And then they'll help you figure out where it's going to make a difference in your operations and in your core services.

It's like you're running the playbook that Ethan Mollick calls “leadership, lab, crowd.” You know, it's coming from the top so that it can come from the bottom, and the entire crowd has to be involved. That's how you’ll identify your lab—those folks who will become super-users and help you figure out the roadmap. I feel like you have to update that article you wrote with Chief Justice Hecht—there's a new chapter.

Jen Leonard: It's also making me think, Bridget, of our last episode about the MIT report on failed pilots. Our team works with firms on pilots; Bridget's team runs pilots for specific purposes. But, David, what you're describing (and what the Moody's article describes) aligns with what that report found—which, as we discussed, was not that surprising. To echo Bridget's comments, everybody is playing around with this in their own individual space and figuring out what it means for them. The media takeaway from that report was, "This is all hype and not really worth investing in," whereas I took the contrary lesson: everybody's experimenting in their own space, and in most cases it's not yet ready for highly structured, measured ROI, because everybody first has to build a muscle around it.

And then, as Bridget said, it starts surfacing use cases and strong ways to put structure around them—where you can organize real, impactful (or, as you said at the beginning, David, effective) ways to serve your community. We have a two-sides-of-the-coin question for you, David. If the L.A. courts were to move too slowly on AI, what do you think the biggest risk would be?

Balancing Speed and Legitimacy in Court Innovation

David Slayton: Yeah. One example: we are seeing tremendous increases in unlimited civil case filings (civil case filings where the amount in controversy is over $35,000), mostly driven by personal injury auto cases, some debt cases, and lemon law cases. There's a hypothesis (the National Center for State Courts wrote on this recently, using our data and a couple other states’ data that look very similar to ours) that AI-developed pleadings are driving some of this. It's getting easier for lawyers to use AI to file documents, and so new cases are coming in fast. Quite frankly, we're not getting additional resources to deal with those. So how do we deal with that? It’s like: either we choose to use AI to help us process all these documents that are coming in and deal with the cases, or we say we're going to keep doing it our old-fashioned way because we're not embracing AI—and we'll just get run over. That's my feeling.

You know, there's not a court in this country—I keep asking every time I speak to courts, “Are you overfunded? Do you have too much staff? Do you have plenty of staff to cover all the work?” The answer is no. No one has enough staff—enough judicial officers, really. So at the end of the day, I think embracing AI allows us to deal with those things we've struggled with for decades, which is: we just don't have enough staff to do it, and to do it well, and to really serve the public as well as we can. Part of the frustration from litigants is that the courts are too slow, too costly, and too unpredictable. Well, AI helps us with some of that—certainly with the too slow piece, because we can use it to help expedite some things, therefore maybe reducing the cost. So all of a sudden, we begin to see benefits to the litigants. And if we choose not to embrace that, then again I think we lose trust from the public—they lose confidence in us. And, quite frankly, they walk away. We can't afford that. Our democracy needs a strong and effective judicial system, and this is a tool that helps us get there.

Jen Leonard: And on the flip side, if the court system moves too quickly on AI, what's the biggest risk?

David Slayton: Yeah. We talked a little about the risks earlier. If we move too quickly, then we again lose trust. As Alexander Hamilton observed, the judiciary has neither the power of the purse nor the power of the sword—only the people's trust and confidence in the system. We can't lose that. So if it gets to the point where people say, "Oh, the judges aren't really making decisions—they're using AI," or we're using AI in ways that introduce bias and cause people to lose trust, those are huge risks. We have to be purposefully pursuing AI in ways that help us solve problems (again, not pursuing AI for its own sake), while keeping in mind the risks that come with it and the things we need to do to protect against them. If we move too fast, we may lose trust even faster than by doing nothing. So it's a delicate balance between those two things.

Bridget McCormack: It's hard, I think, for any of us to predict the future. When the movie camera was invented, all people used it for was recording plays. It took a while to figure out that you could add effects and edit—and an entirely new art form was born. But, you know, we had to first automate what we did before to open up room to see what else we could build. I think AI is going to be like that for all of us in so many parts of our lives, but certainly across the legal profession.

Even so, can you do any predicting about where you see it headed? Is there a future where parties that have work to be done in the L.A. County court might end up interacting with AI more than with human clerks or even human judges? Would that be okay? Should it be okay? Do you have any early views on that?

David Slayton: This is a difficult question. As you point out, it's hard to predict the future. But I have some thoughts. I think the question will be: How comfortable are people in accepting these things? One of the conversations we're having is about the time and cost it takes to get through the court system. I think we could use AI and the data set of cases we have to provide some predictive analytics—to let people know what a typical outcome looks like. Maybe people would choose to take that path—sort of an express lane. They might choose it because they just want a resolution; they want a judgment—always having the option, of course, to go through the full "five lanes of traffic" model. But I could see an environment where you might choose to do that. We do that in our lives in other ways.
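As a toy illustration of the kind of predictive analytics he's describing—not anything the court has built—a court could summarize how similar past cases resolved before offering an "express lane." The record fields below are assumptions for the sketch:

```python
from collections import Counter
from statistics import median

def typical_outcome_summary(historical_cases, case_type):
    """Summarize how similar past cases resolved, so a litigant can
    weigh an 'express lane' resolution against full litigation."""
    similar = [c for c in historical_cases if c["case_type"] == case_type]
    if not similar:
        return None
    days = [c["days_to_disposition"] for c in similar]
    outcomes = Counter(c["outcome"] for c in similar)
    return {
        "cases_considered": len(similar),
        "median_days_to_disposition": median(days),
        "outcome_rates": {o: round(n / len(similar), 2)
                          for o, n in outcomes.items()},
    }

# Example with made-up records: an unlawful-detainer litigant sees
# what typically happens in cases like theirs.
cases = [
    {"case_type": "unlawful_detainer", "days_to_disposition": 45, "outcome": "judgment"},
    {"case_type": "unlawful_detainer", "days_to_disposition": 60, "outcome": "settled"},
    {"case_type": "unlawful_detainer", "days_to_disposition": 30, "outcome": "dismissed"},
]
print(typical_outcome_summary(cases, "unlawful_detainer"))
```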

I know the AAA is working on this in lots of ways. Online dispute resolution is a common thing in other areas. If I have a problem with Amazon, I don't go through some exhaustive process writing them a letter to fix my problem—I just resolve the problem in about 30 seconds. People in those situations often don't know they're dealing with AI, but they are. As that becomes more and more common, and we can get tools that are that effective, I think people might choose to do that more often. 

Again, I think there will be cases where people say, "I don't want to do that—I want to go the old-fashioned way, the way we've done it for centuries." That's fine. But I love the thought of giving people choices. It's the same thing we've done with remote technology: you can come in person or you can appear remotely—make your choice. I would really love to give people similar choices when it comes to adjudication of their cases. As far as filing documents in clerks' offices, we're already seeing some of that. With electronic filing, really no attorney is coming into our clerk's office anymore—they're all filing their documents electronically. And now we're beginning, even in our court, to use AI and robotic process automation to process documents.

We picked 20 of the lowest-hanging-fruit document types in our court and are using AI and robotic processing to handle them. Those 20 document types represent a million documents filed per year. It's tremendously helpful—it keeps our staff from feeling so overwhelmed by the queues of incoming documents and lets them spend more time on the filings that take more of their brainpower. So, in essence, lawyers are interacting more with technology, even if they don't know it, because we're using automated processes to do some of that work on the back end.
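A rough, hypothetical sketch of that triage pattern: auto-process a small allowlist of routine document types at high confidence, and queue everything else for clerks. The document types, classifier interface, and threshold here are invented for illustration, not the court's actual list or system:

```python
# Hypothetical intake triage: a small allowlist of routine document
# types is processed automatically; everything else goes to a clerk.

AUTO_PROCESS_TYPES = {
    "proof_of_service",     # illustrative stand-ins, not the court's
    "fee_waiver_request",   # actual list of 20 document types
    "change_of_address",
}
CONFIDENCE_THRESHOLD = 0.95  # below this, a human reviews the filing

def route_filing(filing, classifier, docket, clerk_queue):
    doc_type, confidence = classifier.predict(filing.text)
    if doc_type in AUTO_PROCESS_TYPES and confidence >= CONFIDENCE_THRESHOLD:
        # Routine and high-confidence: record it straight to the docket.
        docket.record(filing, doc_type=doc_type, processed_by="automation")
    else:
        # Complex or uncertain: route to the clerks' queue with a hint.
        clerk_queue.put(filing, suggested_type=doc_type)
```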

Bridget McCormack: I love the idea of giving parties the opportunity to choose their own adventure. I often describe it the same way—and it's not just the value of the dispute; sometimes parties have to maintain a relationship after the dispute is over, and they might prefer a human-led process. But there will definitely be other, more repetitive disputes where they might want a more automated, AI-native process. Giving them the tools to "fit the forum to the fuss" is a really exciting next step in great court service. It's no surprise that you're at the forefront of it.

Jen Leonard: That's an amazing number of documents that you've been able to leverage AI to handle, David. I can see why you're so excited about your work, but as you noted, it's also a tremendous amount of work to undertake the type of change that you are. So what's keeping you most energized as we reach the end of our conversation with you about your work right now?

David Slayton: You know, one of the things that keeps me energized is the opportunity to make our system better. As Chief Justice McCormack said at the beginning, I've worked in this profession for over 25 years. It's basically all I've ever done as an adult—work in the court system—and I love it. I describe myself on Twitter as a "designated court nerd." But at the end of the day, we can do so much better. The really exciting thing for me right now—and technology is not everything—is the opportunity to use the tools now available to us. Think about the shift we made to remote technology: we wouldn't have done it without the pandemic, but what an amazing thing on the back end, because it's improved people's ability to access justice. We have the same opportunity now, in the right situations and employing these tools appropriately, to use AI to drastically expand people's ability to access justice in really meaningful ways. "Equal justice under law" has been inscribed on the Supreme Court building for a long time, but I don't think we've been able to deliver it in ways that are truly meaningful.

And yet now we're able to do that. We're beginning to use technology to say: it doesn't matter whether you have a lawyer or not, whether you have money or not, whether you can get to court or not, or whether you have prior knowledge—we're going to give you tools that let you access the justice system in meaningful ways. You may still lose, even with all that information—we're not saying we want to make people win or lose; we want to give them information. But we want them to come away feeling empowered to access the system and feeling like they got true access. For me, that's really exciting.

I think the opportunities ahead of us in the next couple of years are tremendous. Technology is getting faster and faster. So as we think about embracing these things and re-looking at our processes to make things work better, to me that's the most exciting thing that keeps me going every day.

Bridget McCormack: The people of L.A. County are lucky to have you there, and I'm going to be really excited to follow everything you do. Thanks so much for joining us today—this was a really fun conversation, like I knew it would be.

David Slayton: Thank you for having me. I loved it.

Jen Leonard: Thank you so much, David, for joining us. It was such an inspiring conversation, and as Bridget said, L.A. County is so lucky to have you. Thanks to everybody out there who joined this conversation—I learned so much, and I know everybody listening did as well. We look forward to seeing you on the next edition of AI and the Future of Law. Until then, be well.

November 10, 2025
