Can AI Help You Win in Court? A New Era of Self-Representation

In this episode of AI and the Future of Law, hosts Jen Leonard and Bridget McCormack dive into two eye-opening case studies. First, they unpack a real-life appellate victory in California, where a self-represented litigant using only AI tools successfully overturned a $55,000 judgment. Then, they examine a bold experiment by Supreme Court litigator Adam Unikowsky, who used a large language model (Claude) to simulate a full oral argument (one he actually delivered in the U.S. Supreme Court).

Together, these stories raise critical questions: Can AI level the legal playing field? Should “robot lawyers” be allowed in appellate courts? And how might this technology change the nature of legal advocacy, access, and trust?

Key Takeaways

  • AI as a Partner: Generative AI can equip everyday people to navigate complex legal systems.
  • Robot Lawyers in Action: A Supreme Court litigator tests whether AI could argue cases better than humans.
  • From Skepticism to Pilot Programs: Why appellate courts should consider testing AI-based oral arguments.
  • Custom AI Coaching: From fitness plans to group dinners, ChatGPT’s practical use cases keep expanding.
  • Justice System Potential: AI could reduce barriers in overburdened trial courts, especially for pro se litigants.

Final Thoughts

This episode bridges everyday AI use and groundbreaking legal experiments. From nutrition plans for cyclists to appellate briefs for self-advocates, AI’s role in our lives—and in our justice system—is growing fast. As courts and lawyers adapt, thoughtful pilot programs could redefine access to justice.

Transcript

Jen Leonard: Hi everyone, and welcome to the newest edition of AI and the Future of Law. I am your co-host, Jen Leonard, founder of Creative Lawyers, here as always and thrilled to be joined by the fabulous Bridget McCormack, president and CEO of the American Arbitration Association. And we are thrilled to be partnering in our newest format, in our new season of AI and the Future of Law, with the great team at Practicing Law Institute.

And we’re excited today to be talking about a big AI win for a self-represented litigant and some affiliated stories related to self-representation – maybe even some “robot lawyering” at oral argument. But as we always do, we are going to start our conversation with two segments that we always lead with. Our AI Aha! Moments – things that Bridget and I have been using AI for that we find particularly interesting and that may inspire our listeners to try in their own lives – and our What Just Happened? segment, covering things that are unfolding in the broader tech landscape that our lawyer audience might not be familiar with, and how we’ve been using them. Before we dive into our main topic: Hi Bridget, how are you today?

Bridget McCormack: Hi Jen, I am good. It’s great to see you. I’m excited about today’s episode and some of the things that have been happening in AI since we last got to talk about it. Among many other things that have been happening – like you introducing me to Arrested Development, which I guess I’m like 10 or 20 years late to, but it’s so exciting.

Jen Leonard: Never too late! I cannot wait for the text exchanges with you late at night. I’m now rewatching Arrested Development just to be on the same page with you. And so many good lawyer characters in that show.

Bridget McCormack: I know – really good lawyer characters. Amazing.

Jen Leonard: So, what’s your AI this week, Bridget?

AI Aha! Moments: How I Used AI to Train and Host a Party

Bridget McCormack: So I had a hard time choosing an AI Aha! this week. (I’m actually going to talk a little bit about another one in our “What Just Happened” section later.) So I’ll go with my century ride nutrition help. My husband and I did a century ride on Saturday – a 100-mile bike ride – and last year we did the same ride. I was quite stubborn about eating when we started really early. The only way to get through 100 miles in a day is to start fueling early, but I was stubborn about eating the bike food you have to carry in your shirt because it’s all kind of gross. I was already tired and I didn’t want to eat gross things. He kept basically saying, “You’re just going to do yourself in – you’re going to run out of steam. You have to eat before you’re hungry, you have to drink before you’re thirsty,” and I know that’s what all the advice says. But, ugh – the food’s gross, and I just really needed, like, an actual plan from a real expert. (Laughs.) I love you, honey, but I needed my real expert to map out for me: What should I eat the night before? What should I eat when I get up at 5 AM? What can I eat from 5:00 to 6:30 that will help me? And then how should I space out my eating?

They had a rest stop at ten miles, and at ten miles I’m thinking, “We just got started, I don’t want to stop and eat.” But ChatGPT convinced me – which, you know, my husband Steve tried and failed to do – that I had to stop, get off my bike, drink the pickle juice, and have a little bit of peanut butter and jelly sandwich. And ChatGPT told me that doing those things early was going to be more important than probably anything else I did on that ride.

It was like a totally different ride for me compared to last year. Last year I just survived, and this year I was like, “I’m doing great!” In fact, when I was done, I felt like if I had to do it again, I could have done it again. It was super helpful. I mean, obviously I could find out these things from Google, but I gave it my training history, my age, my general pace – you know, I sort of told it about me and what I hate about the bike food (the texture of tofu and all of it – it’s like you put these gross things in your mouth). And it came up with such a specific plan for me. I said to myself, “I’m following this specific plan.” And it worked. It was amazing.

Jen Leonard: I mean, we’ve talked about fitness and persuasion and encouragement before as AI use cases. I feel like I’m not in the fitness industry, but if I were, this is absolutely an area where I’d leverage the technology.

Bridget McCormack: You know, I bet there’s a personalized AI coach you could have on your phone.

Jen Leonard: Totally. And like you’re saying – you could Google this stuff – but the magic of it is the customization. You can say, “I really hate this element of it; make it more appealing for me,” and it will.

Bridget McCormack: So anyway, it was amazing. How about you? What’s yours this week?

Jen Leonard: Well, first of all, congratulations on the ride.

Bridget McCormack: Thank you.

Jen Leonard: That’s incredible. So, mine was social. I had a whole group of friends over this weekend – which you know I love. It was one of those really fun weekends where we ended up having an impromptu party at our house. By the end of the day we had six friends over, and we were ordering from this great Indian restaurant in our neighborhood.

I really get a lot of anxiety about making the executive decision on what to order for a large group of people. I never order the right amount of food, and I never know what to order for people. I have friends who are really, really good at doing this – my friend Kate treats us all like toddlers when she orders, and I’m really grateful for that! But I’m not that person. And I’m also always genuinely curious about the advancing capabilities of the AI to interact with different websites.

So while everybody was downstairs, I came upstairs and I asked ChatGPT to look at the URL for the restaurant. It was kind of a tricky website – one of those Grubhub-type menus with submenus for appetizers, entrees, desserts, etc. I told it how many adults and how many kids we had, and the different spice-level preferences, and I asked it to pick popular dishes with a variety of spices. And it went through all the different parts of the website and it generated an order for me that I then just put into the Grubhub app. And it was perfect. The order came, and it was a crowd-pleaser. It was the perfect amount of food.

It turned out some of my friends at the table are not big fans of AI – they were not thrilled that I used it. They even had a whole conversation about how “algorithms are destroying the experience of eating” or something. But for me, it was great because it took away all my anxiety. It took me five minutes, and everybody was satisfied.

So that was my AI Aha! for the week. I was really delighted.

Bridget McCormack: Yeah, little things like that – just not having to spend the mental energy on worrying whether you’re getting it right. It’s such a nice little break, right? You can spend that energy on the next thing – like, spend it enjoying the conversation with your friends instead of stressing over the order.

Jen Leonard: Totally. And if you love ordering for large groups of people, have at it – curate it yourself. But that is not me.

What Just Happened: AI Agents Explained: ChatGPT’s Most Powerful Update Yet

Jen Leonard: So our next segment is our What Just Happened? segment. We know that for busy lawyers, you likely have your head down working on client matters or, if you’re trying to integrate AI into your practice, you’re focused on the tools in your firm or your legal domain. But a lot is happening in the broader tech landscape that may not have reached you yet, even though it eventually will – and what just happened this week was pretty big.

It came out of OpenAI – which, of course, is the company responsible for ChatGPT, and it seems to be running away from the competition in the generative AI space. This week they released ChatGPT “Agents.” We’ve been talking a lot about agentic AI generally – the idea of giving a generative AI tool a task to do and having that tool go off and not need you to direct every step it takes to achieve the goal. ChatGPT Agents are sort of the embodiment of that idea.

It brings together a few different things that OpenAI has rolled out over the last couple of years. One of them was something called “Operator” from last year, I think – that was the ability for ChatGPT to actually engage with websites. You could see it going into your computer and using the mouse to navigate within your computer’s interface and play around with web pages.

It also brings together the “Deep Research” feature, which was one of the earlier tools OpenAI released – we both use Deep Research all the time. It can search across hundreds of websites to produce a synthesized research report on any topic under the sun, and then of course it has the conversational fluency we all know and love from ChatGPT that kicked off the generative AI era.

So ChatGPT Agents combine these elements and allow you to ask ChatGPT to perform an action, and it will go off and engage with a computer interface and do deep research across different websites, talking back and forth with you to achieve practical goals. It could be anything from “Take a look at my calendar and tell me what meetings I have coming up, then create a research report related to those meetings so I’m fully prepared,” to “Plan me a family vacation,” or “Here are my kids’ back-to-school lists – go out on the internet, find me the best deals, and maybe eventually even buy those things for me so I don’t have to think about it.” I would be delighted not to have to think about those things!

And these Agents are performing really impressively on certain benchmarking tests – outperforming a whole host of existing measures. We could get into the weeds on those, but some examples include something called “Humanity’s Last Exam” and a test called “Frontier Math,” which is the hardest known math benchmark test. Early reviews of the system show some very impressive capabilities in how well it handles tasks on the web. Some people who’ve been testing it have double-checked the sites it accesses and confirmed that the data it pulls is accurate. Its versatility is remarkable – it can navigate a lot of different websites and use things like Excel files, PowerPoint files, all sorts of interfaces.

Now, a lot of people have reported – and I’ve experienced this in the few times I’ve tried to use Agents since it was released last week – that it can take a long time to perform certain tasks. And it requires you to sit with it, because occasionally it will need you to log in to various accounts or input passwords, things like that. So that’s an overarching summary of what it does. It’s early days – I think this literally came out just last week.

Bridget McCormack: Maybe Thursday or Friday. I actually didn’t get access to it until today – although I suspect that’s because I forgot to log out and log back in to my ChatGPT account (the equivalent of turning your iPhone off and on when an app doesn’t work!). I kept going, like, “Why don’t I see ‘Agents’? Where’s ‘Agents’?” And as soon as I logged out and logged back in, there it was. Some people did have early access – it looked like Ethan Mollick had it, and Allie K. Miller too, and she seems pretty impressed with what it’s doing.

Jen Leonard: Yeah. I started playing with it as well. I asked it to do two things for me, and I was sort of underwhelmed – but I haven’t played with it extensively yet. For one task, I was working on a project that I felt a little behind on, and I explained the project and asked the Agent to come up with a plan to get me up to speed on a whole host of deadlines. In fairness to the Agent, that probably wasn’t something I needed this capability for – it thought for a really long time and then the output was something similar to what GPT-4o would have given me. Again, I probably only needed GPT-4o for that.

Then this morning I shared some of my internal business files with it – I use ChatGPT Pro, which is how I have access to Agents – and I asked it to help me think through some revenue projections for the years ahead and give me some recommendations on diversifying revenue streams. But I hadn’t given it a lot of context, to be honest – I was really just doing it to prep for today’s podcast. It did give me a new Excel spreadsheet and some recommendations, but they were pretty generic. I think that’s because it had no context or direction from me. It did go into my spreadsheet and identify specific service lines and give recommendations like, “You could probably build this area based on this market trend,” which was interesting, but it wasn’t anything I couldn’t have come up with on my own. But I know you experimented with it and had a different experience.

Bridget McCormack: Yeah. I had a great morning with it. In relation to something you and I (and PLI) are going to start working on, I got really excited. I had lunch with Sharon Crane, the CEO at PLI, last week – she’s amazing – and it got me excited about an upcoming meeting we have to brainstorm some next projects for our collaboration. PLI is very sophisticated about navigating CLE requirements, which – if you’re thinking about building next-gen education tools for lawyers – you’re going to have to account for current-gen CLE requirements, and those are complicated and vary across all the different states.

I wanted to better understand the regulatory environment that continuing legal education falls under. I know Sharon knows all of this and could probably teach it to us, but I didn’t want to bother Sharon with the basics – I wanted to understand it better myself. And then I wanted to identify if there are any CLE providers or even startups doing work using gamification and AI to build next-gen courses, and what that might look like.

The Agent did such an interesting market scan. You know what it kept coming up with as the main finding? Our class – our first collaboration with PLI – which is kind of funny. Then there were some others working around the edges of this space.

After that, I wanted to learn more about why gamification works so well in continuing education in other fields – like in corporate training, where they have really impressive KPIs with gamified learning. And obviously I know what some of the barriers are for lawyers – and they’re understandable barriers.

But I asked it to help me brainstorm how to make that transition for lawyers, given some of the great results you see in other industries. And then after all of this research, I had it put together a deck — just to talk to my own team in advance of getting together with you and the PLI team — so as not to waste anybody's time.

It produced slides. For the first time, I've had ChatGPT produce slides for me — I'm sure you have as well. And obviously no one can produce slides like Mary Ellis. She’s amazing. You and your team produce the best slides in the world. But sometimes if you just need quick ones, and you've been working on something in a deep research project — I’ve had to do it. And they’re always so boring.

For the first time, I saw the code it was using, and the slides were just beautiful. It produced graphs, translating market research into visuals, and it really produced a great deck. I was able to literally just ship it off to my team and say, "Let's discuss in our next meeting." I didn’t even send the research yet!

I think we're going to have fun with this one. I think it's going to be really useful for some of those projects that just never get done — the ones on your list where you're like, "Yeah, when am I going to have time to put that together?"

Jen Leonard: Especially PowerPoint! I mean, setting aside the beautiful decks that Mary Ellis creates — the ones you're using internally, or the ones where you want to share something with people but there’s not a ton of value add in you spending a ton of time making them. I've wanted something like this forever.

Bridget McCormack: I’m sure there's going to be a lot of other things I’ll find out. But so far today, I’ve been pretty impressed. So I think it’s going to be fun.

Jen Leonard: We always say, this is the worst version of this that we'll ever experience. And with lawyers — lawyers get so fixated on the limitations today. But this is where the technology is heading.

A year ago, I remember seeing Operator and thinking about — remember Sandra Bullock in The Net when she ordered a pizza on the computer? I still remember being simultaneously blown away and also thinking it was so clunky. And now, you can’t even remember a time when you couldn’t order a pizza from your phone. So we are at the beginning of that era, which is really interesting.

Bridget McCormack: By the way — last night we were at dinner, and we've been seeing a skunk in our backyard. So I was trying to figure out where skunks live. Like, I don't know what you do to make sure the skunk stays away from us when it's not happy or whatever.

One person was like, “Oh no, they dig holes.” And another said, “No, they live in this.” And I said, “Well, I can settle that.” So I asked my voice mode where skunks live — and it thought I said, “Where do elves live?” It gave me this whole answer like, “They live in magical forests.” 

And I was like, “I don’t think skunks live in a magical forest… because this is like, our regular backyard. It's not a magical forest.”

Jen Leonard: It thought you needed a bedtime story.

Bridget McCormack: My friends were like, “Really impressive technology you've got there.”

Jen Leonard: Oh my gosh. What you just said reminded me of something. I’m working on a writing project and I was using Claude. I was trying to decide whether to include this sixth factor in a list of reasons — but it was tricky because it got into ideological waters, and I wasn’t sure if I wanted to go there.

I told Claude, “Here are the five factors I already have. Should I add this sixth one?” And Ethan Mollick has been talking a lot about how he thinks sycophancy is a bigger problem than hallucination. So I kind of expected Claude to say, “Oh, yes, totally include it.” Like, I thought it would go along with me.

But it didn’t. It said, “I actually think what you have is really solid, and it's enough for the reader to hook onto. Adding this sixth one might distract from your argument.” And I found that super interesting and helpful. I’ve really been enjoying Claude as an editor and getting feedback from it. And it surprised me that it would actually advise against something I was considering doing.

Bridget McCormack: That’s super interesting. I’ve been following Ethan’s thinking on that too, and I sometimes even try to correct for that. Like I’ll say, “Here’s what I’m trying to do — now make the counterargument.” I’ll basically tell it to try and talk me out of something. Because if Ethan says it’s an issue, I figure it must be. I wonder if there's a difference among the frontier models on that.

Jen Leonard: Yeah, would be interesting to know.

Bridget McCormack: I always sort of expect it to just tell me my idea is great — because it feels like they're sycophants. I worry about that.

Main Topic: AI in the Courtroom: A Real Legal Victory

Jen Leonard: Well, that brings us to our main topic today – something I know both of us are really excited about when it comes to this technology in general – which is its capacity for expanding access to justice and access to legal services by putting these tools into the hands of people who can use them to advance their own legal needs. We have some really interesting new thought leadership on this, as well as some real-world experiences – some positive, and maybe some not-so-positive. So, Bridget, maybe you could describe a post you saw and shared with me from LinkedIn by a person named Zoe Dolan – someone neither of us knows personally – who posted something we thought was fascinating.

Bridget McCormack: Yeah. I saw this yesterday and sent it to you (as I do on weekends!). Zoe Dolan – who’s a lawyer we don’t know (though we’d love to meet her, if anyone knows Zoe!) – posted about a person who was in an appellate clinic that Zoe apparently runs, which teaches community members how to use AI if they can’t afford a lawyer to help with their legal issues. And one of the people in one of these clinics had to litigate an appeal in a housing case. The trial court had excluded some evidence – I guess it was a COVID-19 resolution that was in force at the time – and she had defaulted on her rent. She lost her case in the trial court as a result, so she was appealing it. There was also an attorney’s fee award against her – a $55,000 attorney’s fee – as a result of her loss.

She used AI – I’m guessing it was ChatGPT, though she didn’t explicitly identify the tool – to help her with the appeal. And she won her appeal. This woman, without a lawyer, won her appeal. There was a seven-page written opinion in her favor, and it reversed the trial court’s decision as well as the attorney fee award. That is a pretty amazing actual KPI for Zoe and her clinic’s work, right? It sounds like exactly what Zoe was trying to do – and it’s the first known victory of its kind that she’s keeping track of.

It’s also something you and I have talked about many times: by giving people access to the legal rules that govern their situation, you can really empower them to try and find solutions that otherwise they’d probably just give up on.

You know, most people who can’t afford a lawyer in a case like this would probably just take the loss and live with the judgment against them. I don’t think someone who couldn’t pay her rent is likely to be able to pay a $55,000 attorney fee judgment! And if the trial court’s decision was entered in error – which it sounds like, according to the California court, it was – then reversing it is obviously a better result, right? And that’s not a result she would have had access to before this technology existed.

So this story really stood out to us, and it reminded us of our friend Adam Unikowsky’s latest post. You know, I love how self-deprecating Adam is – he essentially empowered ChatGPT to do a job that he did in the U.S. Supreme Court this year. Why don’t you tell the listeners a little bit about what Adam did, and what he’s suggesting as a result of what he learned?

Jen Leonard: Sure. We’ve spoken about Adam before on the podcast. He has a great Substack where he talks about many things, but one topic is these experiments he runs with Claude (which is his preferred gen AI tool). Adam is an appellate litigator with Jenner & Block, and he has argued before the Supreme Court many times – without notes, which is terrifying (I need notes just to summarize his Substack articles!).

Anyway, he ran an experiment to support his argument that AI could be an above-average Supreme Court advocate, and that courts should permit “robot lawyers” at oral argument rather than discouraging the practice. He actually thinks oral argument should be the first place we integrate AI in court, not the last.

So he conducted an experiment using his own Supreme Court case, which he argued last year – Williams v. Reed. The way he did this was: he fed his briefs into Claude 4 Opus, and he had the AI answer the actual questions posed by the Justices during oral argument. And he created a fully AI-generated oral argument using voice mode. It’s mind-bending what he actually did (I don’t know where he finds the time to do all that he does, on top of his practice!). But you can go to his Substack and actually listen to the Justices ask the questions they asked in real life, and then have the AI respond in his place.

Adam concluded that all of Claude’s answers were clear, coherent, and directly responsive to the questions. And Claude (the AI) gave several unusually clever answers – making arguments that Adam hadn’t even thought of himself. Adam thinks that AI actually exceeds human performance at oral argument, as compared with other elements of being a lawyer, for several reasons.

First, because of how quickly it processes information. AI can process information really quickly, which is crucial in oral argument because after a judge asks a question, lawyers only have a couple of seconds to respond and come up with an answer. He demonstrates this in the Substack piece by asking the AI a hypothetical question about the 21st Amendment, and you can review the responses Claude gives – Claude answers it brilliantly in a few seconds, and Adam says it would have taken him hours to come up with the same answer (if he could have come up with it at all).

He also points out – and I can certainly speak to this – that as a 1L student going through oral arguments (and in any oral argument I engaged in thereafter by choice), it’s nerve-wracking to stand in front of a group of judges and answer questions on the spot. But AI doesn’t get nervous. It doesn’t get confused, it doesn’t give garbled answers or make grammatical errors, and it doesn’t get lost mid-sentence the way humans do. In fact, if you read the transcripts of human oral arguments, frequently they don’t make any sense when you see them on paper.

Adam also spends time addressing some of the objections you’re likely to hear from lawyers. The first is hallucinations, which is what lawyers are most concerned about with these tools. He acknowledges that AI does currently hallucinate – especially case citations. But because Adam’s focused on oral argument, he makes the case that oral argument is especially appropriate for AI.

The advocate isn’t supposed to bring up new cases outside of what’s in the court record, so you can ground the AI in the set of cases relevant to this specific oral argument. And the AI is pretty good at accurately reporting information from those documents. That’s one of the techniques we know is likely to mitigate hallucinations.

Then there are authenticity concerns. I loved this part of Adam’s Substack. Lawyers often argue that using AI in oral argument (or other aspects of lawyering) takes out the human element – the part that we want to remain in lawyering. But he actually dismisses this concern, arguing that judges don’t form authentic human connections at oral argument with humans, either. He questions whether we should want any human connections at all during oral argument. And I find this particularly persuasive – we shouldn’t want emotional or personal connections influencing the way judges rule. We should want judges calling balls and strikes based on what the law says, not on how they feel about the advocate or the people standing before them.

So Adam concludes that courts should allow AI at oral arguments, and that they should treat AI arguments exactly like human arguments. He thinks doing so will result in better advocacy – especially for weaker lawyers or pro se litigants. He believes it will create a more level playing field, particularly when both sides use AI. He also argues it will respect litigant autonomy. And he sees very little downside risk, since oral argument is less important to the outcome of a case than the briefing.

He suggests that a sort of low-stakes way to do this would be a pilot program allowing self-represented litigants to use AI at oral argument, with safeguards against hallucinations.

I just thought it was a brilliant piece by Adam (as always). Bridget, you found this and shared it with me, and I’d love to hear your thoughts about it – especially as somebody who was a judge sitting on an appellate court at the highest level (the Michigan Supreme Court). What do you think, from your years of experience?

Bridget McCormack: Yeah. So, first of all, I recommend everyone go read Adam’s Substack – and actually listen to the oral argument audio he generated. I mean, that in itself is just incredibly valuable to hear. And Adam is obviously being provocative for a reason – he’s making the case that, in fact, the large language model could do the job he did (or even better than the job he did), at least in the case of that off-the-wall question about the 21st Amendment. And of course the AI doesn’t make grammatical errors, doesn’t need time to think through an answer – it can just produce it immediately.

Now, I don’t think he’s seriously arguing that the actual U.S. Supreme Court should start allowing this experiment tomorrow. But by using his own case in that Court and showing us the evidence, I think he really does make a compelling argument that there are certain appellate dockets where people who can’t afford lawyers – and have no right to have one appointed for them – should have this option. Like, I don’t know how you could read Adam’s piece and not think it’s a good idea to at least run some experiments, right? Try it in some intermediate appellate courts, for example – state intermediate appellate courts – where there’s a category of cases that self-represented litigants generally just give up on, because there’s no way they can navigate the appellate rules and the appellate argument. It’s just not a fair fight.

And he’s completely right about the closed record. (I don’t know why I hadn’t thought about it in those terms before!) In an appellate court, the whole point of the argument is you stick to what’s already been given to the court. That’s a perfect environment for an LLM to perform well, because you’re not asking it to produce answers outside of the record or evidence it’s allowed to read and understand.

I used to make a similar argument in the context of student practice rules. You know, in many states, the state supreme court will let law students do the work of lawyers if they’re practicing under the supervision of a licensed lawyer. But when I started in Michigan years ago, the student practice rule let students practice in the trial courts but not in the appellate courts.

And I personally moved for an extension of that rule. My argument was: appellate advocacy, in some ways, is a much safer place for a law student to try out lawyering than a trial court. In a trial, having a witness suddenly say something you didn’t expect is terrifying if you’re a new lawyer (or even if you’re experienced!). If you’re in the middle of a trial or an evidentiary hearing and you get an answer you didn’t expect, that’s really hard to handle. Whereas oral argument – you can practice it 50 times, 100 times, as many times as you want. And now you can practice it ad infinitum with your LLM asking you questions. You can be pretty well prepared for any question that comes your way. So for the same reason, I think an LLM could perform pretty well – especially in a situation where a litigant didn’t have access to a lawyer.

We don’t know from Zoe’s post whether that litigant in California used AI to file the brief or just for the research and drafting. I’m assuming she used AI for the briefing and pleadings in the appellate court. I didn’t read any evidence of AI being used during oral argument in that case – and I think we would have heard about it if it had happened. So I suspect the AI assistance was just in the briefing. And many appellate cases are decided on the briefs, especially if the litigant doesn’t have a lawyer to present an oral argument – those cases almost always get decided on the papers. So why not let the litigant use AI for that, instead of nothing? Right?

So for all of those reasons, I think Adam makes a pretty compelling argument that appellate courts – especially busy intermediate state appellate courts – should find a few places where they can experiment with letting LLMs make arguments for self-represented litigants (when the litigants want that). There are lots of civil cases where people have to represent themselves, and maybe the trial courts get it right most of the time – I hope they do – but if there is a case where a party feels the trial court got it wrong, I think it would really build credibility to give them this option to bring their case to the appellate court.

There’s all kinds of data about how, if the process works – if people feel like they were heard and listened to, and they understand why they lost – then they accept bad results. And I just think this could grow confidence in our public justice system if appellate courts could find some places to experiment with it. It also would help address capacity issues. Often these cases are really hard for appellate courts, because it’s difficult to do the best job on a docket where people are trying to make legal arguments without having had legal training.

So I think there’s a win-win solution here for an appellate court that wants to give it a shot. It actually makes me kind of optimistic about where this could lead.

And it reminded me of this other case that was in the press a little bit in March. It was also a self-represented litigant arguing on appeal – I think it was an employment case in a New York appellate court. The specific court and issue don’t matter so much. But the litigant had put his argument into some kind of AI tool and produced a video, and he had gotten permission from the court to play this video in court instead of delivering an oral argument himself.

There was a misunderstanding, though – the panel of judges didn’t realize that the video was supposed to replace the litigant’s oral argument. The panel was upset by it, and they had the video shut down. The litigant felt embarrassed, and then he tried to proceed with a live oral argument, but he had some health issues that made it hard for him to argue effectively. It ended up being a bad experience all around. And, you know, I understand – the court didn’t know what it was getting into and wasn’t prepared. That’s not what I’m recommending here. I’m recommending that courts proactively find a lane where they can experiment with this for certain cases – like Adam suggests, frankly.

But it does feel like another place where AI could really offer solutions to some problems that have felt intractable. So I don’t know – that’s what I think. What were your takeaways, Jen?

Jen Leonard: No, I completely agree. And I feel like I’m always stating the obvious, but all the conversations we have on this podcast – the devil’s always in the details, and there’s so much to work out. But I keep coming back to the importance of a mindset shift that the entire profession needs to make. We’re so concerned about “What does this mean for the lawyers? What will lawyers do in the future? How will their roles change?” But why do lawyers and judges exist in the first place? It’s to help people navigate their rights and obligations to one another in a society that depends on us to uphold the rule of law. And we’re seeing all around us what happens when people no longer have confidence in the rule of law and the structures that underpin it.

I think – and I suspect you agree – the reason I’m fascinated and excited by this technology is that, for the first time in my lawyer lifetime, we have the opportunity to put something in the hands of everyone that allows them to navigate, to simplify, to customize, and to engage with the legal system more on their own terms – without, for once, worrying about what it means for us. And I find that exciting. When I start to see experiments like Adam’s – somebody who is a lawyer of the highest caliber – coming together with self-represented litigants who have no means at all to engage with the system, both figuring out how to use AI to solve problems… when I see those two kinds of people approaching this from opposite ends and coming to the same conclusion, that really excites me.

When you were talking about Adam’s experiment, I actually pulled up Zoe’s LinkedIn post. I want to quote directly from her, because I thought it was so powerful. She says, “I feel the benefits of intelligence democratization likely either match or outweigh the risks. In any event, I myself would be hard-pressed to justify withholding access and would instead continue to favor entrusting other humans to use tools and learn and behave in accordance with the social contract, just as we do in all other affairs, and generally always have.” End quote.

There are problems, to be sure – but I just wish the conversation could be more aligned with Zoe’s thinking and Adam’s thinking, and less about “How do we make sure this is comfortable for us as lawyers?”

Bridget McCormack: We’ve talked about this before, but that scenario would never happen in medicine. If AI figured out a cure for all cancer, the medical profession wouldn’t say, “Oh my God, that would be so bad for oncologists, right?

Jen Leonard: Let’s just pause until we make sure all the cancer doctors feel okay about their role here – before we save lives.” (Laughs.)

Bridget McCormack: And what’s the difference? I mean, I don’t get it, quite – I think what lawyers do is just as important. It’s critical to a functioning society that people have confidence in the rule of law. And I don’t think anyone has a bigger role to play in that than lawyers (and judges, of course). But I see these as good news stories – even if they’re forcing us to think more quickly about what the future might look like with AI than we normally would be comfortable with in the legal profession, which is – for all good reasons – slow and methodical and careful.

Jen Leonard: I also think that, in my own work with lawyers and law firms and others in the profession, Adam and Zoe and people like them are pushing me to help lawyers think big first and then work backwards. For a long while – you know, over at Wharton they talk about this “innovation winter” we were in pre–gen AI – we were all using the same techniques and not getting big gains. The mindset was always “start small, pilot, grow incrementally.” But now we’re entering a new phase where you can start big – you can expand your mind about what might be possible, and then figure out how to adapt it to where you are. And I think that’s where you’re going to find the big wins, as opposed to the way we’ve been approaching it.

Bridget McCormack: Yeah, I think that’s where you see the big breakthroughs – like whole new ways of doing business.

Part of me couldn’t help thinking, in the case of that California rent appeal win, how much time and heartache (and probably worse than heartache) would have been saved if the trial court had had a large language model to help it in the first instance. Because between the time the trial court ruled against that tenant and when she won on appeal, I guarantee you it was a couple of years at least. Who knows what happened to her in the meantime, right? So the “think bigger” part of me was like, this is really exciting – let’s give this technology to the trial courts, where judges are trying to juggle so many cases with so many litigants (many of whom will never be able to afford lawyers). Lawyers are never going to be the solution to the crowded state court dockets. So maybe sooner rather than later, we figure out some solutions for trial courts.

Jen Leonard: Well, this has been an encouraging episode, I feel. Some big early wins and some very cool experiments – and different parts of the profession coming together. I loved this conversation, and I’m grateful to you, as always, Bridget, for alerting me to some really interesting things happening all around the field.

Bridget McCormack: I was actually participating remotely in a conversation that Dan Linna at Northwestern (who I know you know) was hosting – it was Dan plus a member of the computer science faculty at Northwestern, with a bunch of judges and some court administrators. There were federal judges and state judges from different jurisdictions – some were there in person, some online like I was – and I was encouraged by how open and curious that audience was. It was a small sample size, obviously, not representative of the whole world of judges – but there are certainly judges out there who are feeling the pressure to deliver better on what they took an oath to deliver. And if they think this technology might help them do that, they’re interested. That’s exciting, I think.

Adam is a great voice for helping people get comfortable – because, like you said, he’s sort of the fanciest of fancy lawyers. (Laughing) I mean, you phrased it a little more professionally than I did. But the fanciest of fancy lawyers says, like, “Hey, I just ran the experiment, and it did it as well as I could do.” That’s gold, in a way. That’s giving the profession’s giants an opportunity – like, let’s figure out how we might use this to solve some big problems.

Jen Leonard: And to put a really fine point on what you’re saying: if you’re out there and you’re a judge, or a law professor, or a law firm partner – or anybody who feels like you’re carrying the torch alone on this but you’re excited – find Adam’s Substack and send it around to people. Because he is the fanciest of fancy lawyers, and he’s a persuasive voice. I cite him all the time to try to get people excited about this and to persuade them.

Well, thank you. As always, it was great to see you, and great to be with everybody out there. We look forward to seeing you on the next episode of AI and the Future of Law. In the meantime, stay well.