AI and the Future of Law Podcast – Ep. 28: Redefining Legal Training with AI


In this episode of AI and the Future of Law, Jen Leonard and Bridget McCormack confront a growing reality: the traditional apprenticeship model that shaped generations of lawyers is no longer enough. Drawing on insights from Wharton professor Ethan Mollick, the hosts explore how generative AI is upending legal education and professional training—and why the next generation of lawyers will need more than just billable hours to build expertise. From custom GPTs designed to simulate arbitration practice to science-backed coaching models that mirror how we train athletes and musicians, Jen and Bridget examine how AI is not just a tool but a catalyst for rethinking how lawyers are made. Along the way, they unpack Harvey AI’s headline-making partnerships with LexisNexis and iManage, and what these moves could mean for the future of legal work.

Episode Summary

In Episode 28 of AI and the Future of Law, Jen Leonard and Bridget McCormack explore what it means to train lawyers in an era when the traditional apprenticeship model is breaking down. Prompted by a thought-provoking interview with Wharton’s Ethan Mollick, the hosts dive deep into how AI is transforming legal education, professional development, and associate training, and why we may need a whole new playbook.

From building custom GPTs for arbitration learners to experimenting with science-backed coaching models, the conversation maps out new possibilities for what it means to become a lawyer today. Along the way, Jen and Bridget share their personal “AI Aha!” moments and unpack the strategic implications of Harvey AI’s new partnerships with LexisNexis and iManage.

Key Takeaways

Rethinking Legal Training: Inspired by Mollick’s assertion that “the apprenticeship model broke this summer,” the hosts discuss how law firms and legal educators must now build more intentional, scalable ways to train junior lawyers, combining AI-powered practice, coaching, and feedback loops.

Custom GPTs as Coaches: Bridget shares her experience prototyping a GPT to simulate arbitration training scenarios, showing how these tools can help learners build strategic thinking and decision-making skills outside the traditional classroom.

Deep Research + Targeted Teaching: Jen experiments with ChatGPT’s updated deep research tool, highlighting how faster, smarter citation checking can reduce time spent on verification. Bridget envisions GPTs that act as personalized coaches for learning legal skills, much like structured swim and music training for kids.

Harvey’s Big Moves: The hosts break down Harvey AI’s new integrations with LexisNexis and iManage and what they mean for mainstream adoption in legal workflows. Is this the moment generative AI finally becomes indispensable for legal research, drafting, and knowledge management?

Equity & Access in the Age of AI: While large firms are poised to benefit from these integrations, the hosts reflect on the risk of widening gaps in access and offer ideas for how AI-powered tools could also support legal services, courts, and pro bono innovation.

Transcript

Introduction

Jen Leonard: Welcome, everybody. We are so excited to be here in the first episode of our podcast as part of our new collaboration with Practising Law Institute. In case you missed our recent mini episode, the show is now rebranded with a shorter, catchier title, AI and the Future of Law. (It was formerly 2030 Vision: AI and the Future of Law.) Everything is pretty much still the same – it's still Bridget and me, it's still every other week, and it's still focused on how AI is transforming the legal profession. We have a few small changes, like a new look and feel, some new artwork, and new branding supported by the Practising Law Institute and the American Arbitration Association.

We're also really excited to expand our conversation formats. Even though Bridget and I could riff on AI all day, every day (and frequently do), we’d love to bring in some new expertise and perspectives so that we can learn more and help everybody else learn more. So we’ll be bringing in some interviews with leaders in AI, legal education, and practice to get a sense of what’s actually happening out there – how are lawyers using this technology? How is the technology advancing? We encourage you to follow the show. If you like the show, subscribe to it and leave a review. It helps the algorithms pick it up and share it, and we’d love it if you’d share it with legal professionals you know who might find it valuable. We want to spread the word and get more people talking about AI and learning with us.

Just so you know, if you’re a new listener (or if you’ve listened for a while), we generally break our show into three main segments. We start with an AI Aha! – something that Bridget and I have been using AI for since our last conversation that we find particularly interesting. Then we connect the dots with what’s happening in the broader tech landscape for lawyers and legal professionals (for those who might be too busy to focus on those things) so we can all keep pace together in our What Just Happened segment. And then finally, we have a main topic where we focus on something happening in the legal industry related to AI. So we’ll start today with our AI Aha!, and I’ll turn to Bridget to share what she’s been using AI for in her life.

AI Aha! Moments: How Custom GPTs and Deep Research Are Rewriting Legal Learning

Bridget McCormack: As you know, I always have a hard time picking what to talk about. But I decided to talk about something I’ve just been playing with on my own. My team will be able to do a much better job of this, but after our last episode – when we were talking about how ChatGPT was all grown up and scoring A-pluses on law school exams, and you and I were puzzling about what that would mean for legal education and assessment – we wondered: what’s a better way to really communicate legal knowledge? What do law students need to know?

We do a lot of training at the AAA (American Arbitration Association). We train people who want to become arbitrators, and we’re now moving into some advocacy training. Our training is pretty hands-on – it reminds me a lot of the clinical work that I did in law schools, which I think is a pretty important part of legal education. But it seemed to me that we now have all these new tools at our disposal, and we could probably use them in ways that would really enhance that education. So I started working with the initial course. I took the Arbitration Fundamentals course myself recently, just because I wanted to know what the course content was.

I started playing with building a custom GPT for one of the modules covered in that course. The course is live, with a hypothetical case that the learners follow from beginning to end, stopping and starting in different modules and working in teams. The custom GPT build has been really enlightening and exciting because, as you know, I start with one idea and it suggests like four other ideas for different directions I could take it in and different tools I could add to the experience to enhance it. Because I’m just playing with it to see what’s possible (again, my team will do a much better job of this), it’s been just a blast.

I’ve created some personas, and they’re giving the learner (a new arbitrator) some challenges. It’s been really a lot of fun. I don’t know what will come of it exactly, but I’m excited about what we could build in terms of our education products at the AAA.

Jen Leonard: Oh, that’s so cool. I love that. My AI Aha! is not really that interesting, I guess. But you know, we talk a lot about emerging capabilities. If lawyers are listening and they’re not familiar with that concept, it’s the idea that AI develops the ability to do new things that it couldn’t do before. I think part of the challenge of working with lawyers is they get stuck on the idea that it can only do this set of things, when it’s constantly developing new capabilities.

I’m working on a research project, and I was using ChatGPT’s Deep Research function. It reminds me of working with students (which I used to do a lot). You prompt it to do a research report, it follows up with a few questions, then it goes off and scours the internet and delivers a really nice research report with all of these citations. Before, I’m pretty sure that when I clicked the citations, it would just take me to the website. But now the citations are inline in the report (maybe they always were, I’m not sure), and when I click one now, it actually pulls up the website and highlights where it found the information. That is amazing to me, because before I was clicking the URL to see if it was hallucinating and having to do a lot more fact-checking. Now the fact-checking is a lot faster. So, it’s a weird technology (as we always talk about), but it’s great when you find these new things it can do that make things even faster. So that was my Aha! this week.

Bridget McCormack: I don’t think I’ve noticed that feature yet. Yeah, it must have rolled out in the last week or so – maybe I haven’t done a Deep Research report recently. That’s amazing! You can lose a lot of time having to, like, scour a big website to confirm something. That’s really, really exciting.

Jen Leonard: Well, it’s also sort of like, you start thinking about what are the new jobs that are going to be created and what are the new tasks people are going to do. And for a while I thought, well, fact-checking is going to be the next thing that people will move to. And maybe that’s still true, but a week ago I thought fact-checking was something different than it’s already become. Now it’s a lot faster. You don’t necessarily have to go in and look through the entire article. It’s going to show you where it is—and then that’s faster. I don’t know. So that was my Aha! this week.

Bridget McCormack: That’s very cool.

What Just Happened: The Apprenticeship Model Broke — Now What?

Jen Leonard: So let’s move into our “What Just Happened” segment. These are always my favorite, because I get the sense—and maybe you get this sense when you talk to lawyers, judges, and legal audiences—that people who are working just don’t have time to focus on what’s happening in the broader tech landscape (which is where a lot of the really interesting stuff is happening). And it might not be that obvious yet in the legal field how quickly things are unfolding. So, what are we going to focus on this week, Bridget, in our What Just Happened segment?

Bridget McCormack: Yeah. We always have a hard time deciding what makes the cut for this segment, because a lot always happens in two weeks (trust us!). I don’t know about you, but I use a lot of cheat sheets to figure that out—like listening to other podcasts that do some of the aggregating and, you know, spotting the most important trends. But I think you and I both independently listened to a fairly long interview that Ethan Mollick (who we’ve talked about before on the podcast – he’s on the faculty at the Wharton School and is a really thoughtful scholar in AI, AI in education, and AI in training) gave recently. Even though he’s not a lawyer or in the legal field at all, you and I have found that his work is useful in thinking about how the legal profession might move forward with this technology.

Bridget McCormack: So he gave this interview. The interviewer was Joel Hellermark, who I guess is Sana’s CEO and founder. I didn’t know Joel myself before listening to this interview, but he’s clearly quite knowledgeable about AI and has been building with AI for some time. And it was a wide-ranging interview, we could talk about it for the whole podcast.

But I think what you and I both focused on independently (which is why it made the cut for us today) is Ethan’s discussion of apprenticeship-based professions. And law is clearly one of those. You know, for many generations now, the primary way people were trained as lawyers was they worked on many, many matters as young associates for partners and got maybe some feedback. Maybe it was positive, maybe it was negative. And hopefully after some, you know, many hundreds or thousands of hours of doing things over and over again, they kind of magically—this is how Ethan put it—magically became senior lawyers with knowledge and, you know, were able to be strategic partners to their clients. And that’s just the way it’s always worked.

Law is not the only profession like that. It sounds like it’s true in a lot of finance as well, and in a lot of consulting. It’s a similar apprenticeship model. Maybe banking was another area where he called it out—you work 150 hours a week just because we’ve always done it that way. And eventually all of that work, through trial and error, makes you a banker.

That’s the way it’s always worked. And as Ethan put it in this one sentence that stopped me in my tracks, he said: “The apprenticeship model broke this summer. It’s broken. It doesn’t work anymore.” And I think that’s exactly right. That’s what this technology has done. Because it’s already useful for juniors in any profession, but it’s also useful for senior-level lawyers or bankers or consultants or—what do you call someone senior in finance? A financier? I don’t know, I’m making it up now.

I recommend this interview with Ethan to anybody who’s in a profession where this is the way we’ve trained people for any amount of time, which turns out to be a lot of knowledge work. Because I couldn’t agree with him more. He’s just one of the most thoughtful people on the ways we have to kind of start over—about what it means to train people in a profession, and how important it is to really put thought into what that looks like.

And you and I have talked before about what part of that is the responsibility of law schools, and what part of that is the responsibility of practicing lawyers in law firms or other legal organizations. And I hope they can all get on the same page about thinking about what that looks like.

Ethan said, you know, one area where we have figured this out is sports. And anybody who has raised high school athletes—like my kids were swimmers—knows how true that is. I used to think these formulas were insane. My high school swimmers would have such a specific schedule of, like, the things that the coach would have them swim every morning and every afternoon leading up to the state meet, right? If they qualified for the state meet.

And then the taper was also—it was almost scientific, the taper. You know, she knew exactly how much to let off day by day by day leading into the state finals. And sure enough, in the state final, they would swim their faces off in some time that I would’ve thought was impossible. And it was basically science. She wasn’t like, “Just follow me, just swim how I swim.” That wasn’t what she did. It was really like a specific training program.

I also raised kids who had to learn how to play stringed instruments. And music is similar, right? I mean, there are some people who are really naturally musical. My kids weren’t. And they literally had to just, like, practice over and over and over again—like scales, and where exactly that note is on your strings and how to make it sound. And so we do this in other areas. I know it sounds a little boring to do whatever the lawyer version is of practicing your scales, maybe—but maybe it doesn’t have to be. Anyway, there’s lots of other really interesting things about this interview. You probably have some that I haven’t mentioned, but that was the one that really made my brain, you know, start moving. It’s why I started building my custom GPT for training arbitrators.

Jen Leonard: It’s funny you say that because as you’re talking, I want to jump in to create some sort of custom GPT for law firm associate training. Because what you’re describing with your kids, the piece that Ethan’s talking about, that we experienced as junior lawyers—if you were in a law firm—comparing it to your kids and music, like, what you got was practice.

You did get practice at doing all of these tasks over and over and over again, right? Like, you did legal research memos, you drafted complaints. It was boring. And you only got to do, like, these minute pieces. And frequently, if you were like a third-year associate, it was hard to see what the point of it was because you didn’t have somebody—unless you had a great mentor—explain to you, like, “This is how it fits into what we’re actually trying to do for a client.”

It just felt like you were running on a hamster wheel, and your point was just to generate hours—which it is. But you did have the practice. What you didn’t have was the intentionality of the coaching. And there wasn’t really value placed on the coaching beyond, like, “We don’t want the associates to leave. We’d like them to stay to keep generating revenue.”

And the science piece of it—which, like you’re saying…and it was just funny when you were saying, “Just follow me. Swim how I swim.” I’m just picturing, like, a bunch of people in suits and briefcases swimming. I would love to see if you created a custom GPT and said, “Help me figure out what the science is of learning how to develop as a lawyer,” and what it would look like to actually coach and create reinforcement learning, for lack of a better term, along the way. And combine that with practice in a way that would actually produce a lawyer.

Our friend Jae Um always says, “What do law firms actually make?” And her answer always is: law firms make lawyers. But they do that through this process that’s rooted in apprenticeship, which, if Ethan’s right and that broke, we need a new approach that incorporates coaching and science and the practice. So I don’t know. Thank you for comparing that with swimming and music, because now I want to go play and have our next hobby be about what ChatGPT tells me about that.

The other problem with the apprenticeship model is—I don’t know how it works in high finance or consulting work—but in law, the problem is the economics. Because the reason that we had an apprenticeship model is because it’s underwritten by corporate clients who had to put up with training junior-level associates.

Bridget McCormack: Yeah. They were paying for those hours, so it made economic sense. And this might be why—you know, you and I have had many conversations, not just with law firms, but also with in-house legal teams—and I hear in-house legal teams putting together pretty creative training processes on long-standing matters. And it makes sense, because they are, as you always say, a cost center, not a revenue center, right? It makes sense that they can spend the time building a custom GPT for a training set for the new lawyers that are cycling onto a matter. Financially, it makes sense. I think there are some lawyers who are thinking about this. And this will be another area where it'll be fun—in our new format, you know? It should feel exciting. We can start from scratch.

Jen Leonard: Yeah. That would be great—to have a guest on who’s thinking about how to develop some new ways to coach and train and infuse science into associate development. And we don’t have time to dig into it, but maybe in a future episode…the other part of the interview that I thought was really interesting was how Ethan was talking about his classes at Wharton, and how he’s always been sort of AI-forward and had AI in the classroom. But once they moved from GPT-3.5 to GPT-4, the AI was as smart as his students were. And he basically had to completely reimagine what happens inside a classroom. And I just find that totally fascinating. I don’t know about you.

Bridget McCormack: I thought it was fascinating and also inspiring. It wasn’t like, “Oh, and so then I banned it.” You know? He was like, “Nope. We just have to, like, rethink what teaching means for me.” So instead, he builds it in as a teammate, and as a mentor, and as someone who challenges the students in their work. And basically, he just figures out a better way to incorporate it as the technology changes. So it’s pretty exciting.

I mean, maybe there’s some of that going on somewhere, and we can again find some guests. He said so many things that have reverberated in my head since I listened to it. Another one he said—which I know is obviously true, but I hadn’t really thought about before—we often talk about the “jagged frontier,” which is a term we steal from Ethan, of the technology.

And what he means by that is—you don’t really know what it can do and what it can’t do until you try and see if it can do it. There’ll be things that surprise you that it can do, and other things that it can’t, that also surprise you. And so it’s got this kind of jagged edge of capabilities.

And he said, you know, humans also have that. Humans have jagged edges of capabilities, which is so true. Like, we think we all come to a new task in the same way—and we don’t. I mean, I have things that I think I’m excellent at and things that I’m not good at all. And they’re different from the person sitting in the office next to me or sitting in class next to me. Yet another way that technology feels a lot more human than not, when you think about it.

Jen Leonard: Yeah. The other thing that I took from it, too—listening to his class…like, he talked about how he teaches entrepreneurship at Wharton. So he talked about how they used to spend a whole semester, and by the end of the semester, the students had to come up with a PowerPoint deck, or a pitch deck, and a business plan. And now, by the end of the second week, they have to come up with an MVP for a product—because now they’re able to prototype. And you don’t really need to create a business plan on your own.

But when I was listening to that, what I was thinking was—and we’ve talked about this even today—I have so many friends who are very skeptical and, like, averse to using AI at all. And I hear these stories and it’s like this split-screen. And it’s okay, you can opt out if you want to and have whatever ethical or moral objections you want to the technology. But just know that there are people who are racing ahead to figure out how to use it in a million different ways. And so, in some ways, you’re either just shooting yourself in the foot—not taking advantage of it, not figuring out how you can do what you want to do better—or you’re not taking an active part in shaping it, and you’re handing over the control of the future and how it’s shaped to people that you don’t know or don’t trust. Not that Ethan’s students aren’t trustworthy, but I mean, like—the big tech companies. Or the people shaping the technology.

Bridget McCormack: Yeah. That was also fascinating. He said, like, within one class they can have an MVP and then create a bunch of AI users to test it and try and break it and give it feedback. And it’s really apparently changed that kind of teaching. So I think it probably could change a lot of teaching in legal, too. We just have to figure it out.

Main Topic: Harvey, Lexis, and the Legal AI Tipping Point

Bridget McCormack: But there’s some news—we can move to our main topic for today. And that’s the partnership announced last week between Harvey (which we’ve talked about before and is probably the best-known legal AI platform—I think it’s in many global law firms at this point) and Lexis. Why don’t you give us an overview of what this partnership means and why we thought it was main-topic worthy?

Jen Leonard: Yeah, absolutely. So, Harvey is an AI-native company. It was born of this era, after ChatGPT emerged. And one of the things that makes it distinctive compared to some of the incumbent companies is that it’s built by lawyers, for lawyers. Which, by the accounts of people who’ve used it, is what makes it so popular among the lawyers who’ve tested it. And in the last maybe six weeks or so, Harvey has announced two significant partnerships that we both thought were noteworthy, because they solve for a couple of the problems that have kept AI from breaking through to create some of the unlocks we think are possible in legal.

The first is a strategic partnership with the legal information giant LexisNexis. The other is a tech partnership with document management system provider iManage. iManage is what many law firms and corporate legal departments use as their DMS—it has stored and organized all of a firm’s or department’s documents for decades now.

So the interesting part about these alliances is that they really position Harvey to take the AI capabilities it’s been developing and marry those with high-quality legal content—the lack of which is what’s kind of been stymying adoption of these technologies in legal—and, potentially, to integrate directly into the services lawyers are using every day as part of their workflows.

So we thought this was really interesting because it really has the potential to create this breakthrough innovation in legal. I thought it was interesting in particular because lawyers are understandably concerned about trust and reliability and the security of their data—and the protection of client data, attorney work product, all of the things lawyers are concerned about. And the iManage partnership in particular seems to bake in the ability to ensure that all of the data is already protected and compliant with client protocols.

Then on the Lexis side—which appears to be more of a strategic partnership and less of just an API connection point than the iManage partnership, as I understand it—it really allows it to connect to all that public legal information lawyers use to do all the work that they do. As soon as I saw it, I thought it was really interesting. I know it created a lot of buzz in the legal industry. But curious what you thought about it, Bridget.

Bridget McCormack: So Harvey is built on OpenAI’s platforms. And OpenAI is a significant investor in Harvey and has been since Harvey launched, which was very early days—when ChatGPT was like a toddler or maybe like in second grade or something. Harvey had the ability to do pretty significant legal work using the deep research functions that OpenAI has. And then, because it’s AI-native—trained by lawyers—I think the founders are a BigLaw lawyer and a Google engineer.

They trained it specifically for legal workflows. So, legal memos, the kinds of motions and writing that lawyers do—that’s the kind of extra training. So it’s not just “throw your question into Deep Research,” which, as we know, does pretty well. Look at all the tests that are showing that. This is taking that up a notch already. So I think it was already getting a lot of substantive legal use by the lawyers in law firms who were using it. And like I said, I think it’s in many law firms.

But the LexisNexis piece feels like it ties a bow on that. Because there’s always that last step—Deep Research might do a pretty amazing job, it hallucinates a whole lot less, and we now know from you that it shows citations you can actually check faster—but with LexisNexis integrated into that research and writing workflow, it kind of closes the gap: what did I have to go double-check in another window, on another platform, to make sure the citations are all correct before I can send it to my senior partner or file it with a court or with an arbitrator?

So that’s pretty interesting, because LexisNexis has the Shepard’s product, and Shepard’s can do that citation work now all within one platform. So the combination of Harvey and iManage and now LexisNexis—and Harvey itself being built on OpenAI’s products—does feel like a bit of a big deal in legal. I mean, I assume that the law firms that were already Harvey users and also LexisNexis users are feeling really good about the bet they made. You know, at a time when nobody knew what bet to make, right? Because it does feel optimized for next-gen legal practice—whatever that looks like. At least as far as I can tell.

And just, again, asking ChatGPT about the ways in which it all integrates for the user—I wonder if it expands beyond law firms. Like, I don’t know enough about Harvey’s pricing, or whether there’s going to be pricing for legal services organizations who could really use that kind of wonderful interface, right? Between their internal documents, their legal research needs, and the amazing capabilities of OpenAI’s products—built through Harvey’s interface, for lawyers by lawyers. So I don’t know enough about that. I don’t know if you’ve heard anything about that.

Jen Leonard: Yeah, I haven’t. It’s a really interesting question, because I don’t know how many public interest organizations use document management systems like iManage versus something like Azure, or something maybe even simpler that they’re able to connect with—or how that works. But it does seem like, especially if you’re a legal services organization that primarily practices in a single jurisdiction, that maybe there would be some sort of tier where you could get a subscription where you’re just getting Lexis’s content in Pennsylvania or Michigan. Because you don’t need the whole library.

Bridget McCormack: A consumer tier would be amazing. I’m sure they’re trying to do one thing well—and they’ve obviously been doing it pretty well. So maybe they’ll leave that for the next business. But it would be exciting if they did it. I don’t know if you saw this, but in Harvey’s last funding round, which I think was just February of this year, Lexis’s parent company—which is RELX, I think it’s pronounced “Relx”—was an investor in Harvey in February. So that probably was a smart move and probably gave them a bit of an edge in forming this partnership, whatever it is exactly.

Jen Leonard: I will say, I’ve seen some conversation back and forth on social media among small law firms that they can’t even get in touch with Harvey to get a subscription. So Harvey is purely focused on BigLaw. To your other question about legal services organizations—there’s been some conversation about whether there’s a coming split between the haves and the have-nots, when we (I think you and I both) share this hope that this era can be a chance to democratize legal services. Whether it could actually be exacerbated if it only stays in BigLaw.

Bridget McCormack: Yeah. There are so many important use cases for not only legal services organizations and consumers, but courts. You know I’m always thinking—gosh, imagine if courts that had to make quick decisions, often with imperfect information about really important issues that affect families, for example—imagine if they had this kind of powerful tool at their disposal to do a first cut or do a review of something. What a difference it might make in the service they could provide to the public.

So I’m still kind of hoping for those solutions, which is where I feel like we’ll see the real abundance, I guess is what we say now. I don’t know. That’s where I would like to see lots of opportunity for public servants who are really trying to do their best to serve people who usually aren’t showing up in court because everything’s going great. And be able to really take that service up a notch. That would be exciting.

So these kinds of partnerships, though, make me optimistic about our reluctant profession—about thinking differently. Thinking about doing things differently. And thinking about innovation differently. You know, at least differently than ever before in my career.

Jen Leonard: Yeah. And maybe, you know—not to be too Pollyanna about it, because I think there are real challenges—but we’ve talked in many contexts about the opportunities for pro bono practices in BigLaw firms. If pro bono practices at some of the top law firms in the world have access to Harvey, that now has these unlocks available…I mean, the sky really is the limit. There are no business model implications for being able to do all of these things at scale anymore.

And maybe there are opportunities with their partners in the public interest, and in government, and in courts to start to expand the learning. I mean, I think the story of technology is always: costs come down. And hopefully, innovation happens—whether it’s Harvey or somebody else who replicates Harvey’s business model in other places. But that seems like a place to get started, at least.

Bridget McCormack: Anybody who asks me about creative pro bono ideas—this is my creative pro bono idea. Like, find a couple of your engineers who are already building with it, and get together with a local clinical professor who’s got an idea, or a legal services office, or a court that has a big unrepresented docket, and build some solutions. Try them out. See what happens. Like, scale your pro bono work—don’t do it one case at a time at this point.

Jen Leonard: Definitely. The sort of prevailing mental model for using AI at the beginning was “start small and then expand.” But it’s missing the huge opportunities that AI creates. So now, it’s sort of “take big swings, and learn what you can learn, and then scale back.”

And it’s starting to feel exciting when you see these fences being taken down between systems. We were talking about the Hard Fork podcast before we started recording, and Casey and Kevin had the mayor of San Francisco on today. And as someone who used to work in city government, I was really curious—because they asked him about AI, and he was saying how they have 58 departments in the city of San Francisco. We have 33 in Philadelphia. And they don’t talk to each other. They’re all siloed—much like every big city. And the opportunity to be able to have easier ways to aggregate data and talk to it and analyze it in ways that can actually help the citizens of your city is really exciting.

Bridget McCormack: Yeah. That was a fascinating part of Ethan’s interview too. He was saying, yeah, a lot of businesses have the tendency to start small. “Let’s do a small pilot, see how it goes, and then we can scale from there.” And he’s like—why? Why have it summarize the document? It’s been able to do that for two years. Like, just do the thing after the summarizing. I like the “swing for the fences” approach. It’s not super comfortable for lawyers, but…hearing him articulate it?

Jen Leonard: That’s why I love that story that I keep going back to, and I use in every presentation—about that firm in Georgia that got the $7.2 million verdict by using ChatGPT deep research. Because it is like…you hear lawyers use it—in every conversation I’m in they say, “Well, I’ll take an article and I’ll summarize it and send it to one of my partners.” And it’s great. But it’s like, you’re thinking so small. And we’re just used to thinking small. And it’s scary. But I just think we could do so much better and bigger. And that’s why I think we both thought the Harvey story was interesting. I mean, Harvey generates a lot of passion—like, passionate feelings in the legal tech community as well. And I don’t know whether you wanted to share some of the exchange of views on it?

Bridget McCormack: I mean, in the legal tech community and just the legal community, right? There are sort of devotees. I think Meghan Ma at Stanford says that it’s totally reshaping how law firms train and develop junior lawyers. There are now what they call “Harvey associates.” Like, Harvey is in their workflow, and it’s kind of how they work. And then there are lawyers who have really negative reactions to it. It’s never completely clear to me that it’s because of dissatisfaction with the platform—and I should be really clear, I’ve never used Harvey at all. I have no knowledge about it.

But the fact that there is so much emotion about it in the profession, to me, tells me it’s probably doing something right. When people are talking about you—no matter what their point of view is—it probably means you’re breaking through in a way that other legal AI startups have not yet. And there are a number of legal AI startups that are getting tremendous funding and tremendous traction, but none quite like Harvey. Right? I think that’s fair to say.

Jen Leonard: Definitely. And I think that’s probably a good place to end our conversation. But certainly that won’t be the end of the discussion about Harvey, and all of the other developments happening in the AI landscape and legal. We’re so excited to be with our new collaborators at Practising Law Institute, and excited to expand our audience and learn even more as we bring in some new experts in future conversations—and then continue talking to one another about technologies we’ve never used but have strong opinions about their potential.