Generative AI and the Courts: Expanding Legal Access or Opening the Floodgates?


Summary

In this episode of 2030 Vision: AI and the Future of Law, Bridget McCormack and Jen Leonard explore how AI is reshaping legal access, communication, and decision-making. They discuss OpenAI’s Deep Research tool and DeepSeek’s disruptive AI model, which could democratize AI and challenge big tech.

Beyond law, they examine AI’s growing role in healthcare and education, while emphasizing that human expertise remains crucial in legal processes. They also tackle how AI-driven tools can lower legal costs, expand access to justice, and streamline dispute resolution, particularly for low-dollar claims that often go unresolved.

While some fear AI will flood courts with frivolous lawsuits, Bridget and Jen argue its real potential lies in making legal services more accessible. With governments and startups investing in AI-powered legal solutions, the profession faces a turning point—one that demands innovation while ensuring fairness and efficiency.

Key Discussion Topics

  • AI in Law: The Communication Challenge: How legal professionals can better explain AI’s impact and bridge knowledge gaps.
  • The Human Element in Legal Processes: Why procedural fairness and human connection remain central to justice.
  • Generative AI’s Role in Healthcare & Education: AI as a diagnostic assistant and self-directed learning tool.
  • DeepSeek’s Disruption: A new AI player that could democratize development and reshape the US-China AI race.
  • AI as a 24/7 Thought Partner: Using AI for legal strategy, research, and professional development.
  • Expanding Access to Justice: AI-driven platforms are helping individuals file claims and resolve disputes more efficiently.
  • Balancing Innovation & System Integrity: Addressing concerns about frivolous lawsuits while enhancing legitimate legal claims.
  • AI & Government Solutions: How policymakers and public institutions can use AI to modernize legal infrastructure.

Transcript

Jen Leonard: Hi everyone, and welcome to the newest episode of 2030 Vision: AI and the Future of Law. I'm your co-host, Jen Leonard, founder of Creative Lawyers—here as always with the brilliant Bridget McCormack, president and CEO of the American Arbitration Association. On every episode, we try our best to keep up with the rapid pace of change in the land of AI and draw connections for lawyers who are practicing all day, judges who are overseeing their dockets and may not have time to keep pace with AI developments, and think about what those developments mean in the legal profession. Hi, Bridget, it's great to see you.

Bridget McCormack: Hey Jen, it's great to see you too. I liked it in person better, but you know, I'll take this for today.

Jen Leonard: I know, I know—this is subpar. But we will ride again in person very soon, I think. I'm excited about that. And just to preview for people who haven't listened before, we break our show into a few different segments. We both start by sharing ways that we're using this general-purpose technology, so people can get a sense of its breadth and all the different weird things you can do with it.

We used to have a section on definitions related to AI—there's all sorts of weird jargon when you're learning about artificial intelligence—but we thought we might swap that out for a new segment we're calling "What Just Happened?" because so much happens every single time we jump on a recording, and we don't always have time to cover it. So we'll do a very quick survey of some things that just happened, and then we'll dive into the main topic. Today, we're exploring what generative AI means for expanding people's ability to file suits in court and have their rights adjudicated.

AI Aha! Moments: The Horseless Carriage Metaphor & AI Thought Partners

Jen Leonard: With all of that as background, Bridget, I'm so curious to hear about your AI Aha this week.

Bridget McCormack: I don't know if my AI Aha this week will break any new ground for anyone, but it's something I've come back to a number of times. I've been trying to think about how to communicate better with stakeholder audiences—mostly legal audiences, but also people involved in the alternative dispute resolution processes that the AAA administers—about what this technology means and what's coming. I think it's really hard for lots of reasons (as you and I often discuss). People are all over the map in what they understand and how much they're using it. But it's also the case that, as you'll see from our next segment, the technology is moving so quickly that it's really hard for anybody with a busy job to keep up. And if you're not working with it full-time and following it full-time, it feels pretty weird when you just see how it works—and it's hard to trust it.

There are some examples throughout history of significant changes and what it took to get people to trust what became the "new normal." I was reading an essay about the engineer who developed the first horseless carriage—he built it in the shape of a horse. So it was a carriage without a horse, shaped like a horse. Of course I went straight to my generative AI friends and asked for drawings of it. I have some great ones that I've shared with you, and I think they're amazing. I said, "Show me a picture of a horseless carriage that's shaped like a horse," and it's pretty cool to see. The point was to help people get comfortable with this new mode of transportation, where an actual horse wasn't going to be pulling their carriage anymore. This was shortly after horses had been the norm.

Einstein famously said “the horse is doomed as a means of transportation,” which is so funny. But the horseless carriage gave people an opportunity to get used to—or try on for size—what this new way of travel would feel like. You could even steer the carriage with reins attached to the fake horse's head. So I've been asking myself: what is the horseless, horse-shaped carriage for what we're doing right now? Because that's the metaphor I feel like I need to be thinking about when I communicate with people. We're headed somewhere that's going to be really different. You and I have said many times that it's a much brighter future for so many reasons, and I'm excited about it—but I understand why it's scary. So, what are the ways I can bring audiences, stakeholders, partners, people whose opinions I really care about (and whom I want along for the ride in designing what comes next)... how can I transport them in a horseless carriage shaped like a horse?

That's the question I posed to my chatbot friends to help me with a communication project. I started thinking across different stakeholders in the legal world: What are the priorities of the different stakeholders? Where might they be most fearful or worried? What are the things they're most concerned about? How can I tailor my message and what I'm talking about to all those different audiences? And it's been great. I mean, obviously you're a wonderful thought partner for how to talk to different audiences—you're my favorite thought partner—but you're not always available to me. 

So having this back-and-forth with an AI—and I'm creating documents, saving versions, iterating with different messaging, seeing how it feels for me, figuring out the best way to do this—it's just been a great thought partner on something fundamental to what I do every day. It's great to have a 24/7 thought partner. I have random thoughts sometimes and I can go right back into my conversation with the AI and it incorporates them. Having a full-time thought partner is my AI Aha! moment this week.

Jen Leonard: What is the horseless carriage... now that you're describing that, I'm picturing mannequins of your thought partners who are generative AIs. But what do you think are the horseless carriages for lawyers and judges?

Bridget McCormack: I think it's really important to remind lawyers and judges how much of what they do as humans is critical to their work. I mean, yes, there are things that the technology can already do much faster and with fewer mistakes—we know what those things are. But there are other things that AI absolutely cannot do, and those are more important than anything else. You know, there's this robust literature about procedural fairness in courts. I'm going to butcher it by making it really succinct, but essentially: people are willing to accept bad outcomes in court if they feel they were treated with respect and dignity—if somebody looked them in the eye and explained why they lost, what was going to happen next, and why. People are really willing to accept bad news when that happens. The human part of that interaction is the most important part. Humans probably also matter in resolving whatever legal question led to the loss (though other tools might help with that part). And then there's the boring stuff—writing it up, filing it, all the boring parts. If lawyers can really lean into the human parts, it's going to be so much better for their clients, for the users, for our communities. Emphasizing how important the human parts are is the horseless carriage shaped like a horse, I think.

Jen Leonard: No, I think that's right. It seems to be persuasive to lawyers and judges who are uncomfortable with the technology, but it also resonates with the folks who've tried it and still feel that they, as humans, add value—because of the things that AI can't do as well as we can.

Your story reminds me of a video I used to show in my design thinking class from IAALS (the Institute for the Advancement of the American Legal System—I know you're on the board there). It was a design thinking activity for self-represented litigants who had just completed their own divorce hearing. So they've gone through this very difficult emotional and legal process by themselves. I was always struck by the fact that a couple of the interviewees would say, "I'm just so grateful that you asked me about my experience."

That made me sad, too—because just asking them felt like such a big change from what they were used to. They didn't have a great experience in most cases (understandably, because it's very difficult to do that on your own), but just someone reaching out and asking how it was made them feel better about the process. I think we could do a million times better, but...

Bridget McCormack: That's the example I always go to, the people who can't afford lawyers and how important it is to translate complicated, stressful processes for them. That's usually where my brain starts. But it's also true at the completely opposite end of the market, where your strategic partnership with a client that has hard decisions to make is going to be your highest human value. I'm not saying you can't have a thought partner in your AI—you probably can—but evaluating the real-life implications of a bunch of different possible strategies is something humans are going to do with other humans, right? Those humans are going to be able to talk to each other about what's scary about this path versus that path. And I think that's the horseless carriage shaped like a horse: this is good for humans.

Jen Leonard: And also, as you're talking—and I'm putting you on the spot here—you texted me this morning about another cool study in the medical context. Did you want to share about that?

Bridget McCormack: Yeah, it was a radiology study on mammography, where generative AI was far superior to humans alone and even to humans using generative AI, right? (Which is similar to what we saw in that JAMA study.) It's not really surprising. There was a bit of back-and-forth about that result, with a number of doctors saying at this point it's kind of malpractice not to use AI. As a consumer of healthcare, that kind of finding makes me excited, because I get that humans can make mistakes—I understand that, because I am human—but having AI assist with the things technology does better will make it much easier for your doctor to then have the human conversation with you about your options once you know what you're dealing with. It's the same thing—or at least analogous—in law.

Jen Leonard: That's exactly what I was thinking when you mentioned strategic partnership—the statistics from that study you sent me. I recall it was about a 30% higher rate of detecting breast cancer in mammograms. That's amazing. I mean, 30%! But to your point, that doesn't mean that the whole thing is over once you have that detection. You then need a human doctor to help you navigate what is a really difficult situation and help you strategize and plan, in the same way lawyers do.

Bridget McCormack: I'm a WebMD doctor—a very talented WebMD doctor—but I don't end up counseling very many patients. I could imagine the confidence I'd have if I already knew I had the right answer on the diagnosis, so I could move right to discussing what the patient is dealing with emotionally and what their choices are. It would give me a lot of confidence in my future patient interactions.

All right, how about you? You must have had some AI Aha! moments this week. Which one are we going to hear about?

Jen Leonard: So, I always have the most elementary examples—yours are always cool and creative—and this week is no exception. But mine also ties into one of our vocabulary lessons a little bit. There's a concept called RLHF, which is reinforcement learning from human feedback, where humans provide correction, refinement, and alignment to AI outputs to make the model stronger and fine-tune it. So I was thinking in reverse: I'm working on a book project right now, and for part of the book I'm trying to self-educate around change leadership frameworks and the change management literature. I have all these books that I'm reading, and at the end of every chapter, I'm now going into ChatGPT and asking it to create a quiz for me to reinforce my own learning.

The reason I think this is interesting is that I did this maybe a year and a half ago with another book I was reading—it might have been The Innovator's Dilemma, actually—and it was not that great at the time. Some of the quiz questions it generated were confusing or flat-out wrong. But not this time! I mean, now I can say, "Give me 10 questions from Chapter Two of X," and it knows what Chapter Two is about, it comes up with relevant questions, it gives me true/false and multiple-choice options, and if I ask it to mix it up, it will. I really feel like that 10-minute little intervention helps me understand the material in a way I wouldn't on my own. I get excited about the educational applications of generative AI.

I think about people who are not in formal educational programs being able to help themselves learn better, which I think is very cool. So it was like... reinforcement learning from AI feedback—RLAIF?
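For anyone who wants to reproduce Jen's chapter-quiz trick outside the ChatGPT interface, here is a minimal sketch using the OpenAI Python SDK. The model name, file name, and prompt wording are illustrative assumptions, not what she actually typed:

```python
# Minimal sketch of the chapter-quiz study loop described above.
# Assumes: `pip install openai`, an OPENAI_API_KEY in the environment,
# and a plain-text copy of the chapter in chapter_2.txt (hypothetical).
from openai import OpenAI

client = OpenAI()

with open("chapter_2.txt") as f:
    chapter_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a study aid. Write quiz questions that test "
                "comprehension of the provided text, and hold back the "
                "answer key until the user asks for it."
            ),
        },
        {
            "role": "user",
            "content": (
                "Give me 10 questions on the chapter below, mixing "
                "true/false and multiple choice.\n\n" + chapter_text
            ),
        },
    ],
)
print(response.choices[0].message.content)
```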

Bridget McCormack: I think you told me once that your kids are not using AI at school. I don't have kids at home anymore, but if one of your kids got interested in a topic the school doesn't have a curriculum around, can you imagine building a tool like that for your daughter or son?

Jen Leonard: Totally. I mean, I'm at that stage—they're eight and nine, in second and fourth grade—and I'm at the stage where I can't do their math anymore (kindergarten math was my max!). Sometimes the homework says, "Spend time with your student working through their math problems," but they learn math in a totally different way that I don't really understand. But I can imagine working with them using a chatbot to reinforce their learning.

The challenge we've talked about before—and I don't think I fully understand the instructions that come home—is figuring out how to not have them use it as a crutch. My son and I were talking about this when they were doing their Revolutionary War unit and he was assigned to be a Loyalist. I asked ChatGPT (using the voice mode) to describe the arguments the Loyalists would have around the Intolerable Acts, and it basically summarized it in a way that a nine-year-old would understand. My son's response was, "Could you play that again so I could write it down for class?" And then I realized, okay, this has to be done in conjunction with a parent!

But definitely, I could see it. And I could see it in professional education too—really, at all levels of education. I think Sal Khan is doing interesting work with it, which is really exciting for—

Bridget McCormack: For the reasons you mentioned, I think. For places where people don't have access to formal education, it could be a game changer—good for the future, good for innovation, good for all of us.

What Just Happened? DeepSeek vs. OpenAI: A New AI Power Shift?

Jen Leonard: Okay, so What Just Happened? A whole lot just happened in the past couple of weeks. We've identified four things that have happened in the last two weeks, and we're going to do a super quick survey of them (and then use future episodes to dig into them a bit more and talk about what they might mean for legal). Maybe, Bridget, you could kick us off with Stargate, which sounds like a very cool movie?

Bridget McCormack: Sure. I think Stargate was announced right after (or maybe right before) you and I recorded in Philadelphia. Two weeks ago feels like forever ago, given everything that happened after that. But Stargate was a joint venture between OpenAI, SoftBank, and Oracle aiming to invest up to $500 billion in AI infrastructure. Apparently they've already broken ground. It's actually a project they've been working on for some time, but it was announced from the White House, with the President present and seemingly endorsing it. It's not a joint venture with the government in any way (there's no government money involved), but I think announcing it from the White House with the President's support and endorsement was symbolically important. 

There was a bit of back-and-forth on social media after the fact between OpenAI and one of their competitors about whether the money is really there, but it seems like an awful lot of money is already committed. There's pretty good confidence they'll be able to raise the rest to build this enormous infrastructure project for future AI efforts. Did I miss anything, or is that about right?

Jen Leonard: No, that's my understanding. I thought it was amusing that it really has nothing to do with the government, but—as you said—the optics are that the United States is taking its investment in AI infrastructure seriously for global AI prominence or supremacy, I guess.

DeepSeek

Bridget McCormack: Yeah, which is interesting because two days later, we started hearing about DeepSeek, which I think everybody's probably heard about by now. If you look at any newspaper at all, you know something about DeepSeek. What's DeepSeek?

Jen Leonard: So, DeepSeek (and I can't remember the exact timing) might actually have been disclosed the day of the White House announcement, but it didn't pick up press coverage until that Thursday. Commentators are saying it was very intentionally announced because of the AI arms race between the US and China. DeepSeek is a Chinese AI company that was established purely as a research organization—they weren’t consumer-facing. Unlike the American tech giants (who are trying to productize everything), they were just focused on advancing AI. And interestingly—uniquely, at least compared to US companies other than Meta—they use open-source technology, meaning they offer the underlying source code to the public. So, unlike a ChatGPT, where you and I cannot get access to the algorithms and underlying weights and all that, DeepSeek released an open-source model.

This company released a model called DeepSeek R1, and on benchmark tests it seems to perform comparably to the leading US models, including OpenAI's o1. But reportedly it was developed at just a fraction of the cost that American AI companies are spending, using far fewer GPU chips than US companies typically use. You could see that reflected in the stock market: Nvidia’s stock (and Nvidia, of course, is the chip maker that provides all these GPUs) fell by about 17% on the news of DeepSeek's emergence. It really roiled the markets. It seemed to catch the American tech CEOs off guard—though all of them came out and said this was amazing and that they're invigorated by the new competition.

I listened to a lot of the commentary about it—much of it was confusing, with lots of numbers and debate about what really happened—but to me, the biggest takeaway is that this really could democratize AI development. It could change the narrative that we can only place faith in these huge tech companies to develop AI. I think that's exciting because we'll see more competition and a lot of startup activity. That was my takeaway from DeepSeek. What did you think? Anything I missed?

Bridget McCormack: No, that's the story. To put a finer point on what it might mean for expanding the market—expanding who has access to build the technology—you can look at how many people built their own models based on DeepSeek's open-source model once it was released on Thursday. Hugging Face (which is an online platform where people share AI research and models) reported that by Tuesday—just four days later—there were 600 new models built with DeepSeek's technology. Now, there's a lot of debate about whether the DeepSeek team actually did it for a lot less money, and there's a bit more nuance to it than we first heard. But they certainly found a different and more efficient way to train the model.

I didn’t really understand what they did until I watched a video yesterday that explained it really well. Basically, they cut out two of the big training stages that the American companies have all been using, by using a pretty innovative engineering technique. If that's right, it means everything's going to accelerate even more than it already has. So it's definitely interesting. It's definitely going to change the market a little bit, and that's exciting.

Jen Leonard: We've also listened to commentators hypothesize that this could mean the huge big tech companies that everybody's been fawning over (and pouring money into) might ultimately not enjoy the kind of massive revenue streams they think—because the gains will be distributed across consumers and startups. I find that kind of amusing, given that the last two years have been all about waiting for these same few players to announce things and tell us what the future will be.

So, super interesting. One thing I did not know—and we haven't talked about this, so forgive me for asking—is that Meta is the only major US tech company that also offers open-source AI and has fairly cutting-edge models. So I wasn't sure why DeepSeek's emergence was so different from Llama's emergence (where people could also build on top of its open weights). Have you heard any conversation about that?

Bridget McCormack: Not really. I mean, the one thing I think I understand about it goes back to what I just explained, which is that engineering shortcut that DeepSeek pulled off. Both DeepSeek and Llama are open-source; both are available for developers to use. And I think Zuckerberg had a strategy about open-sourcing (which he has stuck with), and he thinks it's now vindicated by how DeepSeek has taken the world by storm. I don't know if any of that's true.

But even though they've all opened their weights, DeepSeek still made an engineering innovation that creates, I think, a better platform for lots of other innovators to use. So I suspect it might be a more attractive one. I don't know why everyone else can't just copy that now—because they did publish a paper explaining what they did. (Not that I can understand the paper, but there's a guy on YouTube who will explain it to you!) So, I think that's maybe the difference.

Zuckerberg does feel that his strategy was vindicated as a result of what's happened with DeepSeek. And interestingly, I think Meta is the only one of those big companies whose stock has not taken a hit in the last week or so—so maybe he's right. I don't know.

Jen Leonard: I guess we'll find out. But I thought the whole thing was very, very interesting, and the timing was super interesting—especially coming on the heels of Stargate. So, we've got the announcement from the White House, then we have the emergence of a Chinese competitor, and then we have OpenAI responding in the same week, I believe, with something called Deep Research. So tell us about Deep Research, Bridget.

OpenAI’s Deep Research

Bridget McCormack: Yeah, so Deep Research is OpenAI's answer to Google's Deep Research. I use Google's Deep Research product quite a bit when I need a primer on something—just enough background information to start thinking about what I want to do about it. It's a great tool. And OpenAI built its own version of that, which I have not tried yet—I don't have access to it yet, although I was reading today about how Ethan Mollick has used it and how excited he is about it.

According to Ethan Mollick, OpenAI's Deep Research tool does a deeper dive and finds its way around problems when it's doing research for you on a project—in a way that Google's product doesn't. I think he said the Google product is like an undergrad doing your research paper, and OpenAI's Deep Research is like a PhD student doing your research paper. Which is a pretty incredible tool to have at your fingertips! It's not going to give you an immediate answer—it can take anywhere from 5 to 30 minutes, they say (maybe longer if it's a really hard question). But to be able to have a PhD-level memo on a question within 30 minutes…

I mean, it's a stunning game changer for anyone who needs research as part of what they do—which I think is most of us, right? There's always something you need a little research on. So, again, when I get access to it, I'll come back and talk about it, but it definitely seems like a big deal. I don't know... have you read any other reactions to it?

Jen Leonard: I've been following Ethan's posts about it, and he's been sharing other academics' posts about it. I think he had one this afternoon from a fellow academic who disclosed that he used this tool and it generated a paper that was then published in a peer-reviewed journal. (I'm not sure how quickly you can get peer review done—maybe it was AI-driven peer review!) But yeah, if you just think about—going back to the democratization point—why we're both very optimistic about these tools...

If I were to sit here in my house and try to find a PhD-level student to help me with a problem or a project or to get a thought out into the world, there are so many friction points that I would probably never do it. The idea that I could put in a prompt and five minutes later have a response at a PhD level—and then, as I understand it, interact with Deep Research and have it push my thinking in a way that's a little different from the Google product (which is more like a very nice summary and portfolio of sources)—I think that's mind-blowing.

And, bringing it back to law (which we'll discuss more in a future episode), this is what people hire lawyers to do: deep research and analysis, right?

Bridget McCormack: Yeah, I think it's going to be very impactful for lawyers. Again, it's another helpful tool so you can focus on those human elements—the horseless carriage shaped like a horse.

All right, one more What Just Happened: OpenAI also released Operator. What's Operator? Do you have access to Operator?

OpenAI’s Operator

Jen Leonard: Now, I have mainly followed this through our friends on the Hard Fork podcast. I listened to their review of it. We've talked before about agentic AI—the idea that you could give an AI a task and it could independently perform actions to achieve that task. When we do presentations, I always tell lawyers: imagine not just generating an itinerary for your vacation, but then having the AI book the restaurants, book the airfare—all of these different things that you would otherwise have to do yourself.

Operator is OpenAI's—I think—first attempt at a real AI agent, where you can ask it to perform tasks on the web and it will autonomously perform those tasks by interacting with different websites. It basically launches a browser within your browser, so I think it's fairly secure. They have discrete partnerships with companies like Instacart, DoorDash, Etsy, and StubHub.

So there are limited use cases so far, but basically, as I understand it, you would ask: "Buy me tickets to the Super Bowl."

Operator would open up an internal browser and go on StubHub and go through the process of finding you tickets. It asks you to confirm before it does any significant action. So if you're like me and you searched for Super Bowl tickets and found that it would cost $36,000 for your whole family to go, thankfully it would ask you before purchasing those—and I would decline. Then it would handle any sensitive information by stopping and asking you to take over. And it outright refuses certain higher-stakes tasks like financial transactions or sending emails.
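As a rough mental model of that flow—purely illustrative, since Operator's internals and API are not public—here is a sketch of a confirmation-gated agent loop; every name in it is a hypothetical stand-in:

```python
# Illustrative sketch of the control-flow pattern Jen describes: act
# autonomously, pause for human confirmation on significant actions,
# hand the browser back for sensitive steps, and refuse high-stakes ones.
# This is NOT Operator's actual implementation; all names are hypothetical.

SIGNIFICANT = {"purchase"}                       # confirm before doing
SENSITIVE = {"enter_payment_details", "log_in"}  # human takes over
REFUSED = {"bank_transfer", "send_email"}        # never performed

def run_agent(task, plan_next_action, execute, confirm, hand_over):
    """Drive the loop until the planner signals the task is complete."""
    while True:
        action = plan_next_action(task)   # model proposes the next web action
        if action is None:                # planner says the task is done
            return "complete"
        if action.kind in REFUSED:
            return f"refused: {action.kind}"
        if action.kind in SENSITIVE:
            hand_over(action)             # user completes this step manually
            continue
        if action.kind in SIGNIFICANT and not confirm(action):
            return "declined"             # e.g., $36,000 in Super Bowl tickets
        execute(action)                   # click, type, navigate, etc.
```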

I haven't seen this in action myself—I haven't seen a lot of demos—but the Hard Fork guys tried it out. One thing that made me laugh about their trial was that they tried to use Instacart to order groceries, but the address defaulted to OpenAI's servers in Des Moines, I think.

Bridget McCormack: Or maybe Instacart's servers or something—but yes, it was Des Moines. 

Jen Leonard: That's right—it was in Des Moines. So they were imagining all these pallets of milk being delivered to somebody's front porch in Des Moines. So, very, very limited right now, but it kind of makes me think of the movie The Net with Sandra Bullock. When you watch that movie now and she's ordering a pizza on her computer, it looks so clunky. I remember seeing that movie and thinking, "That's amazing that you'd ever be able to order a pizza on your computer." But I also thought, "That's never going to happen."

And now I can order a pizza from anywhere—from my phone, from my watch. So I thought Operator was interesting and definitely previews some things to come, right?

Bridget McCormack: Yeah, for sure. I keep hearing that "vertical agents" are what investors are interested in and what legal tech entrepreneurs are focused on. So it'll be interesting to see how that develops. And I think the DeepSeek news makes all of those even more promising. We'll see—it's going to be interesting.

Jen Leonard: But today, Bridget, I really wanted to do a mini interview with you. As the former Chief Justice of the Michigan Supreme Court—somebody who has put a lot of thought and action into expanding access to court services—I want to talk about what opportunities generative AI creates for filings in courts and for people asking to have their disputes resolved in court. Maybe we could start by asking: what is it about generative AI that could lead to more lawsuits, and why do you think that's a good thing?

Bridget McCormack: I do think it's a good thing if more people with non-frivolous claims can get relief for the wrongs they've suffered. I think the more valid disputes we can resolve, the stronger our communities will be and the more confidence people will have in the rule of law. So I'm in favor of any way we can figure out how to resolve disputes that need to be resolved. And there are many right now that don't get resolved. This is something I thought a lot about in many different contexts.

But one that always bothered me—and I tried a couple of things to address it—was personal injury cases where there was no dispute about liability. Basically, everyone agreed who was at fault. But if the damages were under a certain dollar amount, there was really no market for lawyers to bring those claims.

They're too complicated for people to bring on their own. The filing requirements and procedural requirements for a civil lawsuit—even one where you're only alleging, say, $50,000 in damages—are really burdensome for a person without a lawyer. Obviously, $50,000 for a lot of people might be the difference between staying in your home or not, or sending your kids to school, or feeding your family. So it's a significant amount of money to many Americans. And yet, the market for legal services right now has no place for those cases to go. There just isn't a way for lawyers to take on those cases because the cost of litigation makes it impossible for the numbers to work out. I heard that repeatedly from lawyers who do that kind of work.

There's an innovation—maybe in other states as well, but I know it in South Carolina, of all places—called summary jury trials. Basically, the plaintiffs’ bar and the defense bar agreed on a way to resolve those cases that just don't make sense to litigate in a formal court process, because by the time you get through discovery there's no money left for the person who was harmed, and even the lawyers (on both sides) have lost money. It costs everybody too much to get to a result.

So they created this process called summary jury trials. It's essentially a private process that the bar agreed to. They use a courtroom and the jury pool that’s been called to the court, but they have a one-day trial. They limit the number of minutes each side can use. They have a high-low agreement at the beginning of the case—like agreeing that “this case is either worth $25,000 or $75,000”—and the jury is really only deciding between those two numbers. They have a lawyer who plays a judge (wears a robe and everything).

So it's basically a streamlined jury trial process. The people involved still get to have their day in court—they get to tell their story, say what happened to them—and the jury makes a decision. And it works incredibly well. It works because some smart lawyers decided to solve a problem, and they did. My husband actually wrote an academic paper about it once. He went down there for like a month and watched the proceedings, and wrote a very cool paper about it.

I tried to implement that in Michigan, but it never took off. I felt like the lawyers on both sides thought something was fishy—they couldn't trust it. I kept thinking, "How about medium claims courts?" Like, we needed other ways to resolve this category of cases for which the market couldn't respond.

And I think now we might have it. I think this technology is reducing the barrier to resolution in such a significant way that it's finally giving that category of cases a place to go. I mean, there might still be problems if significantly more cases get filed in courts—we can talk about that in a minute—but I do think that the barrier to entry (those startup costs that make it impossible for lawyers to file a case where nobody disputes that someone was harmed and someone was at fault, but the injury is "only" worth, say, $75,000 or $50,000 or $25,000) is coming down. And I find that kind of exciting.

Jen Leonard: I actually represented my parents pro bono once in a case like this—for $15,000—which to them was an enormous sum of money. They had been wronged by an insurance company and nobody was going to take that case. They were lucky to have in-house counsel (me, their daughter) who loves them very much and was willing to help. It involved full-blown discovery, going to the courthouse, having hearings and so on, and we did prevail. But it was a very unique circumstance, and you add that together in the aggregate...

So you've talked about personal injury claims, business torts, some insurance disputes. Are there other categories of cases that fall into this?

Bridget McCormack: I probably haven't thought of all of them, but I suspect it's any civil wrong where the damages are under a certain dollar amount. All those cases are just priced out of the market. I hadn't thought about your example—every once in a while you have a kid who loves you, or you're a lawyer so you can figure out how to navigate the process and it's worth it to you. But for the most part, there's no way for even lawyers who would like to help to make that model make sense, given the costs of litigation as they are now.

Jen Leonard: Yeah, and it's a story for another day, but even as a lawyer it was an incredibly difficult process for me to figure out how to litigate in a county where I didn't normally litigate. It just shows how difficult it is, no matter what your training is.

But it's funny—on this podcast we take an optimistic view, yet I drifted into my old lawyerly tendencies when prepping for this episode. All of my notes were about the dangers of frivolous lawsuits. That's what I was thinking about.

As we were prepping, you said this concern is what you hear most frequently. (We'll put deepfakes to the side, because I know there are concerns about that too.) But why do you think the automatic jump for lawyers and judges to "this is going to open the floodgates to everybody suing one another" is misplaced?

Bridget McCormack: I do hear this sometimes when I present to courts, or at least I used to a while ago. I think the conversation has evolved a lot just in the past year, compared to last year or the year before. But there is this worry: "Gosh, are we going to see thousands and thousands of frivolous lawsuits because these chatbots can spit them out and they'll just automatically file them?"

I can usually make most courts feel better because their tech infrastructure is so outdated—automatic filing from a chatbot is far off. And sadly, even for innovative court leaders, when public funding is how you staff your IT team, it's going to be a while before they could handle thousands and thousands of automated filings.

And look—I think these tools are already pretty good, and you definitely could tell one to draft... well, you'd have to know what to tell it to draft, but you could probably tell it to draft a demand letter and maybe a complaint. 

If you even knew the term "complaint," you might ask it to teach you what you need to do to sue somebody, and it probably could do that. And then you could probably tell it to draft that thing. But then figuring out how to actually file it is almost impossible (with current court systems). I mean, this issue probably shouldn't even be among the top 30 things we're worrying about right now.

Instead, I think we should be focused on the idea that maybe we can figure out how to resolve these disputes that everyone agrees need to be resolved. If there's a dispute about whether someone's liable, well, that's a different category of case—and obviously we should figure out how to resolve those disputes as well. But for the cases where everybody basically agrees who was at fault, but nobody can afford to get that harm addressed—this should be good news.

Because the fact that our legal system can't respond to that right now is harmful. I mean, it's a damaging situation when you have to explain to someone that even though they were harmed, and nobody disputes that they were harmed or who was at fault, we don't have a market that can help them—because they weren't harmed enough. Telling someone that $50,000 of injury isn't an injury the legal system cares about... that's not a great message, right? So if we have a way to resolve disputes that are lower-dollar (but very significant to the people who were harmed), that's probably good news.

As you know, now there are legal tech companies—startups, and some companies that have been around a little longer—using this technology in their business models aimed at this segment of the market. So I think we're already seeing some solutions here.

Jen Leonard: And you have a few examples of those. What are some of the companies emerging to tackle this?

Bridget McCormack: I've seen stories in the last month or so—more than just these two—but for example, I've seen three or four different pieces about EvenUp on the plaintiff's side and LegalMation on the defense side. (Full disclosure: I know the team at LegalMation, and they're very impressive.) 

The EvenUp team has raised a considerable amount of funding in the last few months. They're basically doing that front-end work a plaintiff's lawyer has to do to get a case ready—to figure out if it's a case they can file. That means determining whether they can prove someone was liable for a harm, and whether the harm is sufficient to make the case financially viable. There's a lot of upfront work: gathering documents, tracking down information a lawyer needs to evaluate the case and decide if the numbers make sense. EvenUp's platform is doing all of that, and then it's putting together demand packages.

It's giving attorneys who do that work this brand-new tool to assess case value, figure out which cases they can put in which process, and they're able to file significantly more cases. In fact, EvenUp reports that law firms using their platform have collectively claimed or settled more than $1.5 billion in damages, which is significant considering they've only been using the generative AI component of their work for a short time.

So that's pretty interesting. Again, I'm sure there are others like EvenUp on the plaintiff's side (we're not endorsing particular companies or anything). LegalMation is one that's been around a bit—they were using technology for years, even before generative AI, to enhance efficiency in litigation. I don't think they only work with defense firms or corporations; I believe they'll work with anybody, because their technology can be applied broadly. But they now leverage AI to automate a lot of aspects of litigation as well. I've seen a number of articles mentioning both EvenUp and LegalMation. So I think this is a segment of the market that this technology is going to change—and I, for one, think that's mostly good news.

As you know, one of the things I think about all the time and I'm constantly trying to figure out solutions for is that we need more effective operating systems now. Maybe the parties will be able to work things out themselves and they won't need courts, or won't need arbitrators or mediators—but when they do need those, I want our operating systems to catch up to these new tools for the buyers and sellers of legal services.

Jen Leonard: Very cool. And it connects all the way back to our What Just Happened segment—the acceleration and democratization of this technology, and all the cool ways it can be developed by entrepreneurial minds. It also seems—and I was thinking about this in another conversation—like it helps connect the statutory remedies the legislature creates with an actual market where people can take advantage of those remedies or assert their rights. Right now, the legislature might create remedies, but there's nobody to help the average person actually execute them. So there's a gap between those two things. This tech feels like it improves how the different branches of government connect and function together as well.

Bridget McCormack: You're exactly right. 

And I've seen recently that the frontier tech companies are now building products for governments. Did you see that? OpenAI had a big announcement about it. And a couple of other companies have announced that they're investing in building products for governments. That's very exciting, because I was a little worried that all the activity in the market would be aimed only at those who can pay the most for the shiny new tools. And that's not usually your local or state or even federal government, frankly. But in a lot of ways, if governments get access to these resources, they can really provide better service to the people they serve. So that's exciting.

Jen Leonard: Very cool. Well, thank you for course-correcting me on my risk-spotting, issue-spotting lawyerly brain around frivolous lawsuits! We hope everybody out there enjoyed this optimistic look at what the future could be—where people have better access to the places that can resolve their rights and responsibilities more efficiently. We hope you'll join us on our next edition of 2030 Vision: AI and the Future of Law. Who knows what will have happened by then? Thanks to everyone out there for listening, and we'll see you soon. Be well.