In this episode of AI and the Future of Law, hosts Jen Leonard and Bridget McCormack examine how generative AI is entering state courts, using Pennsylvania's interim policy as a real-world case study. They outline what the policy permits (document summarization, preliminary legal research, first-draft communications, readability edits, and public chatbots) and the guardrails it requires: human review, confidentiality, and leadership oversight.
They also compare practical workflows from their own work, from using Claude to edit a book to turning long‑form writing into slides with ChatGPT, Claude, and Gemini, and they unpack the “AI pilots are failing” narrative versus on‑the‑ground adoption and ROI realities.
Finally, the hosts explore state court “AI labs,” enterprise access, and judge‑focused guidelines—plus why pilots like the Michigan Supreme Court’s partnership with Learned Hand deserve attention.
Key Takeaways
- AI as a Partner: Practical, low‑risk court uses with required human review.
- Guardrails That Matter: Confidentiality, leadership oversight, and approved tools.
- Pilots vs. ROI: High adoption can precede measurable P&L impact.
- Build Court Labs: Enterprise access and shared learning accelerate safe use.
- Judicial Readiness: Practical guidance for judges and a look at Michigan’s pilot.
Final Thoughts
Generative AI is a general‑purpose technology that will permeate legal work; interim policies are training wheels, not permanent rules. The smart path is ethics‑anchored experimentation: start small, measure outcomes, and keep humans accountable. If you serve courts, clients, or the public, now is the time to learn and lead—set guardrails, pilot responsibly, and help shape how justice systems use AI to expand access and accuracy.
Transcript
Jen Leonard: Hi, everyone, and welcome to the newest episode of AI and the Future of Law. I'm your co-host, Jen Leonard, founder of Creative Lawyers, here as always with the wonderful Bridget McCormack, president and CEO of the American Arbitration Association. Hi, Bridget.
Bridget McCormack: Hi, Jen. It's great to see you. I'm excited about a lot of stuff we have coming up together this fall. It feels like summer's over and we're back at it.
Jen Leonard: Absolutely. And it may be fall, the leaves may be turning, but AI never slows down. We're just turning the page and accelerating right into the autumn. As we always do, we've structured our podcast with three segments: our AI Aha! segment (the things we've been using AI for that we find particularly interesting), our "What Just Happened?" segment (something happening in the broader tech landscape that we try to bring to a legal audience and explain its relevance for lawyers and legal professionals), and then our main topic, which is something happening that relates to AI in the legal industry.
Today we are going to talk about a recent interim order from the Pennsylvania Supreme Court related to how judges and court personnel are now allowed to use AI in that court. But before we dive into all of that, we are going to do our AI Aha!'s—and I'm going to do mine first this week, because I'm not going second anymore. At least not this episode, because I'm always embarrassed because mine are so simple, so I'm going to go with my simple one first, and then you can blow us all away with your incredible way that you're using AI.
AI Aha! Moments
Jen Leonard: My AI Aha is... you know, I'm working on this writing project, and I've been using Claude to help edit my work. I don't write that frequently—I speak more than I write, and I enjoy speaking more than I enjoy writing—but it's been a great editor for me. I've been uploading different chapters that I've been writing and asking for feedback, which I find to be really helpful and also much easier for someone who doesn't enjoy writing than getting human feedback. I have a human editor as well who's lovely, but it's much less humiliating, I guess, to get feedback from a chatbot.
What I found interesting was that the feedback across the different chapters really reflected my confidence in the chapters themselves. The feedback that Claude gave me was unsurprising in some ways, but it surprised me by noting that the chapters where I really knew a lot more about the content came through as much more helpful, useful, passionate, and inspired than the chapters where I had educated myself more recently. Claude sort of told me that it sounded like I was trying to come across as an expert, and that I sounded more like I was trying to be an academic and less human-like.
Then we talked back and forth about whether I should even keep those chapters, and it helped me think through how to restructure the things that I knew well. It was really helpful because I was trying to do everything, and it helped me sort of focus on the things that I know how to do. I thought it was really interesting—I didn't expect it to capture that so well. So that was how I was using it.
Bridget McCormack: That's super interesting. And actually, that's the kind of feedback that you can really hear and respond to well, right? And I think I'm with you—I’d rather hear it from my chatbot friends than from a human. Are you just uploading the chapter and then asking it to give you narrative feedback, or are you asking it to do line editing, or both?
Jen Leonard: Just narrative. It's more like asking, “How do these chapters work together at this point?” Because I knew they were choppy in their current form. So it's like an early read, like: “Do these ideas make any sense together?”
I sort of had a feeling when writing them that some of them felt better to write than others, and it definitely picked up on those. But I think the vision I had for the entire project was more comprehensive, and I felt like it had to be. Then Claude sort of helped me see that it didn't have to be. And you'll appreciate this, because it obviously remembered this from other conversations and was flattering me a little too much. It said, "Clayton Christensen didn't write 'The Innovator's Dilemma' about a business's entire problem. He only wrote about how to innovate and how to respond to disruption. So you don't need to write about a law firm's entire business model. You can focus on these three or four things that you know a lot about, and not these two or three other things that it feels like you don't know as much about," which was entirely spot-on feedback.
That's how it felt when I was writing it, but I felt like I had to include those couple of topics that I was learning about. So, I don't know—it just felt better to receive the feedback, but it also felt highly personalized, because it brought in that detail of something I've clearly talked about with it in the past through the Clayton Christensen reference.
Bridget McCormack: Yeah. That's amazing. That's pretty cool. Well, mine is actually not that interesting, and I wish you'd stop building up my AI Aha's—because, as you know, low expectations for life, you know? Like I always tell my kids. So I don't think mine is that impressive at all.
But, as you know, I will ask the same questions to all three that I use all the time—Claude, Gemini, and ChatGPT—on legal questions where there is a clear answer that I know something about, so that I can better assess differences in what they're giving me. I'll also sometimes ask more strategic questions, because I feel like I can at least assess how thorough the answers are, whether they're thinking of things I wouldn't have.
A couple of weeks ago, I was talking to you about this—I think on the podcast—about how I was using all of them to help me quantify errors that human judges make. (For lots of reasons—not because they're trying to cut corners, but because they're human beings in an overburdened system.) Quantifying how regularly those errors occur and how error is kind of built into the system... that's what appellate courts are for.
I was writing this essay (which I finally did finish, right?). It's not out anywhere yet, but hopefully it will be one day. But I wanted to take that and turn it into a few slides. You know I'm presenting a lot, and at the AAA we're building this AI-native arbitrator. The question about hallucinations actually has a really good technological answer in our case, because, as you know, you can train your model on a closed set of information and actually reduce or even eliminate hallucinations. (More specifically, I just want the conversation to expand a little bit to: as opposed to what?)
So I took the essay, fed it into all three chatbots, and I said: "Turn this into as many slides as you think I need to tell the story I've told in this essay, in slides, if I'm presenting." I got really different outputs, and I don't know if I'm missing some functionality on Gemini. Gemini did a very good job with the substance, but it wanted to just give me Google Sheets with what each slide would contain. And I was like, No, I'm asking you to actually make the slides for me—just make the slides so I can copy and drop them into my deck where I'm talking about all this other stuff. It just really didn't do a good job with the design; it did a decent job with the content.
ChatGPT did the best job. (I'm using GPT-5.0 Pro, the best model available.) It did a brilliant job with the content—I felt like it picked out the best parts and organized them in a way that told the story I was trying to tell in three slides. But the design was just kind of boring.
Claude's design was far and away the best. I saw it doing all this coding before it showed me what it produced (and none of that means anything to me). But then it produced really beautiful colors and animations—I mean, they were pretty great slides. But the substance was not as good.
So I went back to ChatGPT and I said, "Look, I really like your substance the best—you're in a contest. But can you amp it up a little bit? Give it a glow-up; give me a bit more in the design and creativity. Make it interesting for someone to look at. Nobody wants to look at these boring-ass black words on a page." And then it did a pretty good job. It was a reminder that sometimes you have to be specific not only about the substance of what you're looking for, but also about how you want it delivered to you.
So anyway, PowerPoint right now—if you tell it to be creative and make it visually interesting—it's doing a pretty good job.
Jen Leonard: Did you tell it "You're in a competition"? I wonder if that matters—I mean, I know there's all this research now that it doesn't really matter if you say stuff like that, but I wonder.
Bridget McCormack: That's what the research says—it always still acts like it cares, so I don't know. I still don't think we really understand how they all work, so I try everything.
What Just Happened?
Bridget McCormack: All right, let's move to "What Just Happened?" In our "What Just Happened?" segment today, we're going to address an MIT report that was covered in Fortune, I believe, a few weeks ago. The headline was that MIT did a survey over the course of about six months on over 300 AI initiatives that different businesses or organizations were running pilots for. They did interviews with representatives from the organizations, and they did surveys.
The sort of top-line finding was that they're mostly failing—or at least that's what the headlines reported, I think. The more specific finding was that very few of those organizations were seeing a real, palpable P&L effect as a result of their AI pilots; a very small percentage of them were seeing real ROI (if you measure ROI only in your bottom line, right?). They also reported high adoption rates: people are using it, people are using ChatGPT, using Copilot. But they just weren't seeing improvements across their organizations in a way that showed a difference in their bottom line or their revenue.
If you actually read the report (and not just the headlines or the people talking about it on LinkedIn), there was a lot more detail about what they really found and what they didn't, and where the gaps come from—who was finding a way to actually get real value out of their experiments. There was then lots of pushback and critiques of the study itself—like who was actually interviewed, how scientific it really was, what to make of it.
It does feel to me like we have a point every summer (kind of late every summer) where there's this one article—last summer it was the Wall Street Journal piece—where the report is like, "It's not really happening. AI is, you know, kind of failing, and the bubble is about to burst." And that's what this kind of felt like to me, at least the swirl around it.
I don't want to put that on the MIT researchers—I don't think that's fair. I think they just reported what they found, but that was the swirl around it, which was then belied by the Oracle earnings report, which was mind-bending.
I'm curious what you thought of it—what you thought of the swirl around it—and whether it resonates with what you're seeing. You're working regularly with all kinds of legal organizations that are running pilots at this point. Some of those pilots you helped them design, I think, and others they designed on their own. Tell me, what were your reactions to the report and the swirl around the report, and how it aligns with what you're seeing in the market.
Jen Leonard: Yeah. I mean, the underlying discussion in the report really resonated with me and with what we see on the ground. The disconnect is in the headline takeaway: "95% of pilots are failing, and therefore AI is overhyped and we should just abandon all efforts to integrate AI in our organizations." That's the part that's getting all the attention, and it's the part I would disagree with.
When you dig into the report and it talks about the high levels of adoption—particularly of things like ChatGPT or Copilot—that's definitely what we see on the ground. I was at a retreat the other day where I asked a roomful of lawyers how many people are using ChatGPT on a daily basis, and almost everybody's hand went up. That's very different from even in the spring. The adoption is way up.
People (meaning law firms or corporate legal departments) are not seeing ROI yet in terms of revenue growth or savings to their bottom line. But I think that makes complete sense, because, you know, if we've followed Ethan Mollick's work or others' work... I know Andrej Karpathy, in that great presentation he did early in the summer, talks about how unusual this technology is in the way it reached us. Most advanced technologies reach us because they're first developed at a really high level—at the government, like at NASA—and then they're deployed in corporate environments, and then they reach individuals. But this has been deployed in the exact opposite direction: from individuals, then possibly up through corporations, and then maybe through the government and geopolitically. So it's not surprising to me that we're all experimenting with it at the individual level, and then we'll figure out what it means for our organizations.
I also think—I take the position (and I know we've spoken with people who feel differently)—that this is a very different type of technology. It requires a more organic way of engaging with it than other types of technology. And overly structured pilot projects, in my opinion, are not necessarily the right approach to learning how to work well with gen AI. I like to just experiment and tinker with it. And if I were asked to adhere to a specific rubric, I would probably ignore that and just go off and surf on my own to figure out how to use it however I wanted. I think that's probably more of what's reflected in the report: people going off-grid and using the tech in ways that make sense to them.
And that's also what we see within firms: a tax attorney is going to use it completely differently than a real estate paralegal, who is going to use it completely differently than a marketing professional. So it's going to take a lot of time for that to shake out and start making sense at an organizational level. But that doesn't mean the transformation is not happening. Does that align with how you think about the transition underway?
Bridget McCormack: Yeah, it does. As you know, my teams have all been building all kinds of workflow improvements, new products, and new services with the technology. We have a bit of a let's try it all approach—in large part because we can learn what the technology can do and what it can't do just by trying to build some of the good ideas that people have. Also because people across the business who sit in different seats have a better sense of how the technology might make a difference for their workflows. Like, I'm just not going to know how the finance team might benefit from it, or how the marketing team can improve what they do or their reach with it—they can, but I can't.
I've said this before: we've had a few things we've built that really didn't work out. Sometimes it was because of the jagged edge problem—the technology just wasn't going to work for that particular use case. Sometimes it was more work to put the solution together than the parties appreciated, and it just didn't really produce something that people wanted. Other use cases that were wildly successful... I haven't translated all of them into P&L success, because that's not how we're thinking about it. We're thinking that we need to learn all the ways we can do what we do better and serve more people with the technology.
We do put some measurements around some of what we're seeing, because I think it's useful to be able to talk about it that way—you know, how much time is saved when you give people a faster, more accurate way to produce a timeline across a set of pleadings, for example, or when case managers can use the arbitrator search tool with natural language instead of trying to manipulate the old-school system. But I think a lot of what we're building will show up in success rates (defined a bit more broadly than just P&L). We're a nonprofit with a mission to serve, but it's going to be a while before we can disaggregate all the different things going on and figure out where to credit the technology versus the people empowered by the technology, or both.
Jen Leonard: And if you think about the law firm context, where you have timekeepers, one of the biggest challenges is just people finding the time to figure it out. Even if they're experimenting with general generative AI tools like ChatGPT or Copilot, they're doing that in their spare time, which is minimal because they're billing during the rest of their time. So one of the challenges in a law firm context is creating billable incentives. If you're giving out billable-hour credit for people to experiment, that takes away from your revenue. You have to be willing to invest that time by taking a hit to your revenue, and recognize that ultimately you might be taking a bigger hit to your revenue if you find efficiencies through what you're learning with AI. Then you're going to have to reshape your pricing models, which is what we're seeing in conversations with law firms.
They realize—I mean, they know this, but they're starting to feel it—that they've got these dual challenges now. People are starting to adopt it (and we want them to adopt it, because we have to maintain competitiveness in this environment). We have to find ways to carve out the time, which doesn't align with our business model. And then when we carve out the time and we're successful at it, that's going to carve out revenue from our existing models. So we also need to find time to figure out a new model and how we price that.
So in our industry—in the private sector part of our industry—it's even more complicated than the MIT study, because the revenue is going to go in the opposite direction when you start finding gains in efficiency. All the findings align with what we're seeing on the ground in terms of change management needing to happen, and then it's even more complicated once the change management actually begins to happen.
Bridget McCormack: Yeah. Not to mention the technology is a moving target, right? You build these systems on the latest model, and then there's a new model. And you have to have a robust way of auditing what you've built. My chief technology officer talks about this all the time with what we're building: like, What's our audit process going to be, and what are we going to do if OpenAI changes the model and it affects some of what our tool is supposed to be doing? Are we going to shut it down and put up a big notice to everybody, like "we're on hold for a little while"? I mean, there are so many complicated pieces to this change-management moment that make it harder, I think.
Jen Leonard: But one thing I'm starting to see—by mid-2025 this started happening, and you and I have seen it in presentations and headlines—is that I'm starting to see it in conversations: lawyers are starting to see the possibilities. In one-on-one conversations with lawyers, I'll hear: "Oh, the things that I used to do on paper that I could never find as a litigator... I'm now able to upload all these documents, and it's like this sorcerer can almost bring them to life. And I can now do things with information that I could never do before. And that's like a whole new world for me." Or, you know, patent attorneys finding things in a patent application that the review of five other attorneys never would have found—that is a material difference. So those light bulbs are starting to go off.
But like you said, it's all going to take time. And it seems like the media wants to find a reason why it's all a house of cards. I think the difference with other bubbles is that people actually are using this, and they are finding amazing, breathtaking use cases in their own lives that show the promise of what's ahead.
Bridget McCormack: Yeah, I think that's right.
Jen Leonard: So keep plugging along with your experiments. The models are getting stronger, and we'll keep seeing more use cases. And change takes time. Human change takes longer than tech change does.
Main Topic: Generative AI in State Courts
Jen Leonard: That brings us to our main topic in our industry, which takes even longer to change than some other industries. We saw an interim policy come out of the Pennsylvania Supreme Court just in the last couple of weeks about the use of gen AI. So the Pennsylvania Supreme Court (which is the jurisdiction where I'm licensed) now allows the use of approved generative AI tools for certain defined, low-risk tasks, as long as there is human review, strict confidentiality, and leadership oversight.
We're not going to go into a deep dive on the specifics of the policy, but essentially, with an approved tool under this interim policy, judges and court personnel can use gen AI in certain cases to summarize documents and do preliminary legal research (as long as the LLMs are trained on comprehensive, up-to-date, reputable legal authorities); draft initial versions of some types of communications or memoranda; edit or assess readability of public documents; and in some cases provide chatbots or similar services to the public and self-represented litigants. All of this comes with the caveat that no nonpublic information can be shared with non-secured systems, everything has to be reviewed by humans, and there are all sorts of safeguards in the policy.
You can read it online if you're interested in more detail. But we really wanted to open up a conversation about state court judges, because we have the benefit of the former Chief Justice of the Michigan Supreme Court as the co-host of this podcast—Bridget.
I've heard from a few state court judges who listen to our podcast, who have been experimenting with large language models on their own and have been really fascinated by their capabilities and are self-educating. So I saw this headline come out and thought maybe I could interview you for a few minutes about your thoughts on state courts, and kick it off by asking what you think would be most useful and helpful for state court judges as they think through how to apply AI in their work. Imagine if you were still on the bench when this technology came along—how would you be thinking about integrating AI into your work as a judge?
Bridget McCormack: I think it's an exciting technology for state court judges. And let me fan out for a minute. Ninety-seven percent of criminal cases and civil cases are adjudicated in state courts, not federal courts. So they're buried in work—let's just start with that. They usually don't have the funding and support (and therefore the staffing and help) that federal judges have. And yet, you know, the people who appear before them have really important problems that they're looking for help to solve. And they might literally be problems that make a difference in the rest of their lives. So figuring out how to meet the demand that the public makes on state courts has been a challenge for many decades now.
Courts were sort of designed for a different legal infrastructure than the one we now live in, where most people are coming to courts with civil justice problems without lawyers—most people. And judges still have to make sure they hear those folks and help them solve those problems. So I think this technology is, hands down, a very useful one for overburdened state courts.
Courts, however, will need an engineering team, or at least a chief technology officer, who can make sure they have access to tools they can use safely and feel confident about, and that's going to be tricky. I think there are funding issues, and state courts rely on another branch of government for funding. They pass their budgets once a year; they can't just knock on the door next week and say, "Hey, there are these new large language models that would really help us. Can you give us some budget for enterprise licenses?" But that's what I would want to do if resources were not an issue. I think you want to get judges across your court—led by the Supreme Court and with representatives from courts in every part of the state and on every docket—to be like your lab. Give them access first to enterprise licenses, maybe across a few models, and let them start figuring out where in their workflows it might make a difference. And not just judges—you know, court staff are often the folks that have to answer questions for people navigating civil justice problems without lawyers, and giving them the technology to figure out how they might be able to clone themselves (which you can do with these tools) is another game changer.
You and I have talked about this before: there are some state courts building pretty cool chatbots for self-represented litigants. Some did it really early—Maricopa County, Arizona did one back when we were still at GPT-3.5. But many state courts have done that now, and those are turning out to be excellent tools for state courts.
I think people think of judges' work as written opinions and orders—and that is one thing judges do. They also, however, have to issue many, many rulings and orders orally from the bench, because they simply don't have time to do written orders or opinions in the thousands of cases they handle. So you can certainly imagine how the technology might be helpful with sorting out some of the things that human beings just can't do in a day. Again, I think you need a lab so they can start playing with the technology and figuring out where it might make a difference. And then some pilots that they can launch in different places.
The Michigan Supreme Court announced a few weeks ago (before this order from the Pennsylvania Supreme Court) that they were starting a pilot with Learned Hand, which is a startup that's building AI for courts—generative AI for courts in particular. So I guess the Michigan Supreme Court is taking the plunge and trying to figure out how they can use the technology to make a difference.
I think the only mistake courts can make is not figuring out what it's going to mean for how they can serve the public better. And that means just not engaging at all.
One other thing I learned recently: you know our friend Judge Scott Schlegel, who's now on the Louisiana Court of Appeals. When he was on the trial court in Louisiana, he was widely viewed as one of the most innovative judges in using technology to serve the people who came before him. He built a lot of tech solutions on his own for his local court, and it was pretty great. One year he even won the National Center for State Courts award for the most innovative state court judge, and he deserved it.
He's recently drafted a set of guidelines for judges who want to figure out how to use this technology to produce better results. So that's exciting too. I think we're seeing things bubbling up where courts are trying to figure out how this can be useful. And that's great, because I think the sky's the limit.
Jen Leonard: So it sounds like maybe one of the biggest limiting factors—echoing the law firm example—is time: having the time for judges and court administrators to connect with one another, to sit, and share, and experiment.
Bridget McCormack: Yeah. I mean, that's a problem across our profession, right? It's even a problem—as you know, my team started early. We gave everyone enterprise licenses and asked them to meet regularly and talk about how they were using it (same for our panelists and the same for our engineers). And, you know, these are all busy people that we're asking to get on and meet and talk about it and upload use cases—it means an hour of work that they're going to have to fit in some other time. So figuring out how to give them some space for it has to be a priority. And it's hard when you work in a government-budgeted environment. It's very hard.
Jen Leonard: Do you think in the long term that the types of policies governing gen AI usage will feel more like training wheels that come off eventually, as the technology becomes more diffused into everything we do? So that we're not constantly updating these policies as much.
Bridget McCormack: I do. I think this is so fascinating. I wish I had captured a snapshot of all of this as it was happening, because in the beginning we had all these orders—some from individual judges, some from courts—about lawyers' use of the technology. Some of them were like, "You can't use it." And those mostly got rolled back (quietly and quickly rolled back), because you can't tell people they can't use it—they're all using it. I mean, it's in your Grammarly, it's in your email; Copilot's all over your life. So we're all using it.
I think many courts—some publicly—just withdrew their orders and said, "Turns out the rules of ethics already tell lawyers what they need to know, right? You have a duty of confidentiality. You have a duty of competence. And all of those duties apply even when there's a new technology that you can use in your practice. You don't get to go ethically naked just because there's a fun new toy in your office." Like, you still have to put your ethics clothes on every day, no matter what the technology lets you do.
And I feel like now we're starting to see... I mean, I credit the Pennsylvania Supreme Court with just dipping their toes in—like, Let's tell the world this is something even courts are going to be doing. And I think that's great. I wonder if it was in response to—there were these two reports (both involving federal judges) of judges issuing opinions with hallucinations in them that got tons of coverage. I feel like the lawyers went first (remember that one guy in The New York Times for like three years? That poor guy), and now it's the judges, because they've made the mistake of not checking the work. I worry a little bit that that's why we're seeing this, like, "Oh, you can only use tools that have been approved by XYZ." I don't know how that works; some of that doesn't make sense.
I get that we're struggling in the moment with making sure the public can have confidence in what courts do, and I applaud the effort. But I do think it's just a snapshot in time, and probably six months from now, a year from now, two years from now, we'll look back on interim guidance like this and think, "Oh, we had no idea..." We were just in this liminal period between the before-times and the after-times, when generative AI is just built into all of our workflows.
We still carry our same ethical duties, and judges have those too—they don't change just because you have new technology. But you can rely on those guardrails; they'll still guide you. They'll still help you decide what the technology is really useful for, how to use it to get the most out of it, and where you need to be cautious and careful.
This was an issue with social media way back in the day. Believe it or not, there was a time when (maybe it's still true for many judges) judges were like, "I don't know if we can use social media." Some were all over social media, and others were like, "I don't know if we can." There were bar ethics committees and judicial ethics committees issuing all these rules. And they all basically said, "The code of conduct still applies, even if you're on Facebook Live." I would talk to judges about social media and they'd say, "Oh yeah, I don't know why you could be confident about it, because whatever you say, someone could post it to the world." And I would respond, "Well, what were you saying before people were posting it? Just don't say those things. The rules will work even with new technology." I think that's probably where we'll land.
If we ask ChatGPT in two years to pull every judicial order about generative AI and then build a timeline of what was prohibited when and what was permitted when... that's actually a fun project. I'm definitely going to do that before our next podcast to see where we are now. But I do think when we get through this phase, we won't need these kinds of interim orders or guidance. Right now, we're riding in the horseless carriage shaped like a horse. So here we are.
Jen Leonard: Definitely. And governments and courts are always, like you said, trying to strike so many different balances—the public confidence. But I also always worry about attracting really great talent to work in public service–oriented roles. I'm hearing more and more about people who are moving away from organizations where... I was talking with somebody the other day who moved roles because of their AI policies, and it feels like you're stepping back in time when you walk in the building, because you're living in two different decades, almost.
So I think, like you, that it will naturally evolve—because it'll be like not being able to use the internet or a cell phone anymore. And of course we have COVID in the mix too. It's just that this evolution is happening faster than the ones in the past.
Bridget McCormack: Yeah. And I think bigger, right? Because it's not really just an email upgrade or the cloud. It's not just a new technology; it's a general-purpose technology. I didn't live through electricity, but apparently that was a big deal—like, a lot changed.
Jen Leonard: Well, thanks so much for sharing your thoughts, Bridget. Definitely more to come. It's not the last time we will talk about state courts and generative AI, because there's so much opportunity here to think in new ways and expand access to legal services. I just have to say, and maybe we can talk about this on a future episode—this is not based in reality, but have you read the new novel 'Culpability' yet?
Bridget McCormack: No. Do I need to?
Jen Leonard: I just started it. It's a fun one, a quick read. It's being pitched as the first AI novel (which obviously it's not), but it involves a family of five who are in a fatal car collision, and they're driving in a car that has a driverless AI system. The question is: who's at fault when that car collides with another car? A minor character is the Dean of Penn Law School, so it's all lawyers involved. And they talk about whether AI is a tool or whether AI has its own agency once you can't see under the hood and it makes its own decisions.