Summary
In this episode of 2030 Vision, Jennifer Leonard and Bridget McCormack dive into the evolving landscape of AI in the legal profession. They share personal experiences with AI tools, discuss the latest advancements in AI models, and explore the implications of artificial general intelligence (AGI). The conversation touches on the future of legal education, the skills that will define the next generation of lawyers, and why leadership and transparency will be critical in shaping the profession.
Key Discussion Points
- How AI is reshaping legal work and the skills needed for success
- The real-world benefits of AI tools, as experienced by legal professionals
- Major advancements in AI, including the latest model releases
- How legal education is adapting to AI-driven changes
- The impact of AGI on law and society at large
- The role of leadership in promoting transparency and innovation in law firms
- How law firms and legal institutions can proactively embrace technology
- The necessity of strong interpersonal and ethical reasoning skills in future lawyers
Transcript
Jen Leonard: Hi, everyone, and welcome back to the 2030 Vision: AI and the Future of Law podcast. My name is Jen Leonard. I'm the founder of Creative Lawyers, joined as always by the fabulous Bridget McCormack, President and CEO of the American Arbitration Association. Hi, Bridget.
Bridget McCormack: Hi, Jen. How's your room over there next to my room?
Jen Leonard: It's amazing. So Bridget and I are in the same building, but in different rooms. We thought it would be cool to podcast together and then realized that we actually had to podcast apart to be together.
Bridget McCormack: Yeah, but it feels together-ish. I mean, I could come over there really quickly if I wanted to.
Jen Leonard: Totally. Well, it's lovely to see you, and welcome to everybody joining us. This, of course, is a podcast where we focus on all of the emerging technology in the AI landscape and try to break down what is happening in the broader tech world and what it means for lawyers. Then in every episode, we do a deep dive into a particular topic that we find interesting.
Today, you and I find ourselves in Chicago because we presented earlier on a panel together about AI and the future of education and work, which I know we both care deeply about. So we thought we'd share some thoughts on that.
Bridget McCormack: Yeah, it was a great discussion. I'm looking forward to having it again here.
Jen Leonard: For everybody else, as with every episode, we start with two smaller segments. First are AI Ahas – moments where we have played with AI in our personal or professional lives and found it to be particularly delightful. And then our newest segment, What Just Happened?, which tries to get people up to speed on exactly that. It's very difficult to follow what's happening in the broader tech landscape, so we try to pick one or two topics each week that are relevant and explain why people should be paying attention (if at all).
AI Aha! Moments: How Deep Research is Changing Legal Work
Jen Leonard: So let's dive right in and hear about your AI aha for this episode, Bridget.
Bridget McCormack: I continue to work a lot with Deep Research, and I was working with it on a few topics this weekend. For one of them, I really wanted a visual of the structure I was describing in a memo. In my mind it needed to be a diagram, so Deep Research produced computer code, which I recognize as code, but I don't know how to code (I have no training in coding). Deep Research made it easy, though: it gave me a link to a GitHub page where you can just drop the code in, and it produces whatever the code tells it to produce. The memo even said, “Click here to copy the code and then go to this link.” I did exactly what it told me: I copied the code, went to the GitHub link, and it produced this beautiful diagram that was exactly how my brain was visualizing it.
I'm sure with a little bit of practice, I could probably even learn how to change that diagram by tweaking the code. Actually, I wouldn't have to know how to code it at all—I could probably just go back to Deep Research and say, “Great first try, can you make it a little different,” the way you do with these models, and then it would probably just create brand-new code for me. But I have to say, it's kind of exciting to be able to be on GitHub and feel like I had my first GitHub experience.
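The transcript doesn't specify what kind of code Deep Research produced or which tool rendered it. Purely as an illustration of the "diagram as code" idea, a minimal sketch like the one below, using the Python graphviz package with invented node names, turns a few lines of code into a structure diagram:

```python
# Hypothetical sketch of "diagram as code" (not the actual code Deep Research produced).
# Requires the graphviz Python package and the Graphviz system binaries.
from graphviz import Digraph

diagram = Digraph("org_structure", format="png")

# Example nodes and edges; the labels are invented for illustration.
diagram.node("board", "Board")
diagram.node("ceo", "President & CEO")
diagram.node("ops", "Operations")
diagram.node("tech", "Technology")

diagram.edge("board", "ceo")
diagram.edge("ceo", "ops")
diagram.edge("ceo", "tech")

# Writes the DOT source and a rendered PNG to the current directory.
diagram.render(cleanup=True)
```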
I keep hearing that we're not going to have to learn much about coding in the future—that we're just going to be able to produce things, build things in natural language, which I assume is what's coming. But it feels like even before we get there, everything's gotten a lot easier. So don't ask me any follow-up questions about coding or GitHub because I won't know the answers! But that was pretty cool, I have to say. So how about you?
Jen Leonard: I didn't even know enough to ask a follow-up question, but it's exciting to know that you and I can now access the realm of coding with zero expertise. So mine was sort of similar, because I was on a flight to come to Chicago and I have been working in my small business to figure out how to right-size current work and future work—how to keep enough in the pipeline, but also not be overwhelmed by work—to make sure that we keep delivering quality services.
And it seems like that should be an easier thing to figure out than it feels. So I opened Claude on the plane and asked it to create a dashboard for me. I gave it: here are the services that we deliver, here are the price points, here's where we'd like to be (but we want to avoid burnout). How can we think about this so that we have an at-a-glance idea of our capacity?
And like you said, the window opened on the right side after I gave the prompt, and I felt like I was in Inspector Gadget or The Girl with the Dragon Tattoo, because it just started coding. And like you, I have no idea what it is, but I could recognize that it looked like coding. I felt very cool, because I thought surely there are people on this plane who have no idea what Claude is, and it looks like maybe I'm a cool coder.
And it developed this beautiful dashboard that was exactly what I was looking for. It had a kind of red-light, yellow-light, green-light system. When you're in green, you know you want to be thinking about building the pipeline. When you're in yellow, you can make a decision. When you're in red, you really need to focus on getting the work off your plate or managing it.
The only issue was that all of the numbers were made up, except for the ones I provided in the prompt. Which makes sense—I mean, it doesn't have any actual data to work from. The numbers I gave it in the prompt (for revenue goals and things like that) were accurate. I imagine if I followed up and said, “Okay, the assumptions you're making are not correct; can you update with these numbers?”, it would generate a more useful dashboard.
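The red/yellow/green logic Jen describes boils down to a simple threshold check. A minimal sketch, with hypothetical utilization thresholds and revenue figures rather than the numbers from her prompt, might look like this:

```python
# Minimal sketch of a red/yellow/green capacity check.
# The thresholds and example numbers are hypothetical, not the actual figures
# from the prompt described in the episode.

def capacity_status(booked_revenue: float, capacity_revenue: float) -> str:
    """Return 'green', 'yellow', or 'red' based on how full the pipeline is."""
    utilization = booked_revenue / capacity_revenue
    if utilization < 0.70:   # plenty of room: focus on building the pipeline
        return "green"
    if utilization < 0.90:   # getting full: decide what to take on
        return "yellow"
    return "red"             # over capacity: offload or manage the work

if __name__ == "__main__":
    # Example: $80k booked against $100k of sustainable quarterly capacity.
    print(capacity_status(80_000, 100_000))  # -> "yellow"
```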
But you can sort of see where it's going, where I don't need to access some separate software. I was trying to figure out how to integrate HubSpot and QuickBooks so that my invoices and my clients matched up. Like, it would be so great to have a world where I don't have to think about those things at all, because I don't know how to do those and I don't have the time. So that was my AI Aha.
Bridget McCormack: I don't think you're going to have to figure out how to do those. It's going to figure it out for you, which is amazing.
Jen Leonard: Yeah. And this is out of the scope of our conversation, but I saw somebody on LinkedIn the other day post something about “the coming wave”—to borrow Mustafa Suleyman's phrasing—for the software industry. If we're all just able to build these things ourselves, especially as agentic systems become more sophisticated and AI can go off and access different places and come back and use that to populate something in an LLM, what does that mean for the business model of software?
Bridget McCormack: What does it mean if you plan to go do computer science in undergrad? I'd love to know how engineering and computer science departments are thinking about what they do. I know we're going to talk about how law schools are thinking about the future, but it's made me curious. I wonder how they're thinking about it in undergrad programs.
Jen Leonard: I'm going to mention the Ezra Klein podcast during our What Just Happened? segment, but this is exactly what he was talking about this morning. He was talking about marketing majors in college and how much of that work is being increasingly done by AIs, and how colleges are responding to that and trying to create new value for their soon-to-be graduates in a world where those jobs are going to be dramatically changed. And he said—and I agree with him—that there's just not enough public dialogue about how rapidly some of these markets are going to be disrupted and what we do as a result.
What Just Happened: OpenAI, Anthropic & xAI—Inside the Latest AI Model Releases
Jen Leonard: So moving into our What Just Happened? segment. So much is happening so quickly, and certainly a lot has happened since our last recording a couple of weeks ago. But maybe you can share with us, Bridget, a little bit about the model releases that we've seen in the last week to 10 days?
Bridget McCormack: Yeah, there's been a lot of action in the model releases. The first is that Google introduced an AI “co-scientist,” and I can only describe it secondhand (this is not something I have played with at all—I'm not even sure I have access to it). It's a multi-agent system built to help researchers with hypothesis generation and experiments, in service of better science and faster scientific discoveries, and I'll be eager to see what it produces.
And then most of the other frontier companies also had releases since you and I last recorded. Elon Musk's AI company, xAI, released Grok 3, its latest large language model, which was trained on a massive supercomputer—apparently with ten times the compute of the previous model. It's an advanced reasoning model whose benchmarks apparently come in better than OpenAI's and Google's models, at least in some areas.
For the legal profession—I don't know if you saw this, Jen—but one of the things that xAI advertised when it was released was that it was going to displace courts, that you could take all of your legal questions and legal disputes directly to Grok 3 and it would handle it for you in no time and for no money. I mean, it seems like Elon Musk has a lot else to do, so I don't think that Grok 3 is going to replace courts tomorrow, but it was a pretty interesting signal that at least xAI views the legal profession as a target. I don't know. We'll see.
Moving on to Anthropic (the company that builds the Claude models, which I think you and I both use pretty regularly—I still do), they released their latest model, Claude 3.7 Sonnet. It's a hybrid reasoning model that can switch between quick answers for questions that don't require deliberation and step-by-step problem solving when a question needs it. You can ask simple questions or complex questions without switching between models—something you still have to do with OpenAI.
And I don't know if you have this experience, but I will sometimes pull up my ChatGPT account and it's still set to whichever reasoning model I was using last time, but I have just a simple question and it starts over-reasoning it. And I'm like, “Oh, come on, ChatGPT—I didn't mean for you to give me a 10-step answer to my simple question,” which is something I think all of these companies are going to solve before long. In fact, I think Dario Amodei said as much on a Hard Fork podcast this week: they're all getting to a place where the model should be able to figure out how much reasoning a question needs, instead of the user having to figure it out and make sure they're in the right model before they ask.
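For readers curious what that hybrid-reasoning switch looks like in practice, here is a minimal sketch using Anthropic's Python SDK. The model ID, prompts, and token budget are illustrative assumptions, not anything discussed on the show; the point is simply that the same model handles both the quick request and the deliberate one:

```python
# Minimal sketch of Claude 3.7 Sonnet's hybrid reasoning via the Anthropic SDK.
# The model ID, prompts, and token budget here are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Quick answer: no extended thinking, suitable for a simple question.
quick = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=500,
    messages=[{"role": "user", "content": "What does 'ADR' stand for?"}],
)

# Step-by-step mode: the same model, with an extended-thinking budget enabled.
deliberate = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4000,
    thinking={"type": "enabled", "budget_tokens": 2000},
    messages=[{
        "role": "user",
        "content": "Compare arbitration and litigation for a cross-border contract dispute.",
    }],
)

print(quick.content[0].text)
```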
And then finally, OpenAI rolled out GPT-4.5 to sort of mixed reviews. I don't know if you've been following that at all. It's advertised as imperfect, but getting there—on the way to GPT-5 (which was also a way of telling us that GPT-5 is coming). GPT-4.5 is allegedly a little bit friendlier—a model that you'll want to hang out with a little more, to the extent that's a thing. But even Sam Altman, in his posts about it, was like, “Yeah, it's expensive and it's getting there, but you know… hang on, GPT-5 is coming soon.”
I don't know exactly what that means, but it sounds like by spring sometime we're going to get that model that combines all the capabilities into one place. At least that's what it sounds like GPT-5 is going to do. So we're not going to have to think through whether we're in a reasoning model or a non-reasoning model when we ask questions.
So that was a lot. Did I miss anything?
Jen Leonard: We probably always miss things, but those were the highlights. And I played around a little bit with 4.5. It doesn't really feel all that different to me. I know people say it's friendlier, it has a more lovely personality—for me, I use Claude for personality. I think Claude has a very nice personality (and we'll talk in a minute about some of the answers Claude gave us as compared to ChatGPT).
But I am very much looking forward to the day when we don't have to select among these models, because I don't really know how to choose a model. I mean, it gives you little hints, but I don’t really know. It makes me think of this little sign my mom bought me that sits by my computer that says, “Hold on while I overthink this.” Yeah. And that's what I feel like with the reasoning models sometimes. I'm like, I did not mean to get an advanced PhD-level response to this—I just need to know something simple.
But, you know, all of the releases are really showing how advanced the capabilities are. And I have really enjoyed Claude 3.7 Sonnet, and Dario Amodei said that it will soon be connected to the internet. I'm curious whether I will then migrate to using Claude more frequently, because I mainly use ChatGPT since it's connected to the internet and Claude isn't.
Bridget McCormack: Yeah, I know—that will be interesting to see. I think a lot of people feel that way. I mean, right now I think ChatGPT holds on to most users because of that functionality that people have come to expect. And so it is curious that it's taken the Anthropic team this long, but I don't know. They have other things that I think we all like about their models. So we'll see.
What Just Happened: AGI on the Horizon? Why Experts Think We’re Close
Bridget McCormack: But that brings me to the second topic in the What Just Happened segment of our show, which is that it seems like—not just because of these rapid releases of new models, but in a lot of the conversations the leaders of these companies are having—everybody feels like we're getting a whole lot closer to AGI, that it might just not be very far off. In fact, you hear people say maybe even later this year, and if not that, then probably next year, that we will have artificial general intelligence.
Let me pause for a minute so you can remind us what that means. And I know not everybody likes that term (including Dario Amodei), but what are we picking up on about how close we're getting to AGI and whether we're preparing for it appropriately?
Jen Leonard: I think there are all different kinds of definitions as to what this is and when we'll know that we're there. I think of it as the point at which AI systems become as capable, if not more capable, than the smartest human beings across all fields, and can essentially do anything cognitively that we're able to do now. And then of course ASI is artificial superintelligence—when it exceeds our intelligence. But I just have had…
And what do they call it? Like “vibe-coding” in the AI community. I've had this vibe absorption of these conversations I'm hearing in different places. From the Hard Fork interview that you mentioned with Dario Amodei—they talked about the AI conference in Europe in February and sort of critiqued a lot of the commentary happening at that conference as not sufficiently recognizing that AGI is on its way, focusing almost entirely on opportunities and liberalization of model development and accelerationism. And in that conversation, the upshot I think was that people aren't really comprehending what AGI will mean for all of society. And it's not that we shouldn’t look for the opportunities; it's just we also really need to start wrapping our heads around what that means.
And then just this morning before we recorded, The New York Times released a new podcast interview that Ezra Klein did with Ben Buchanan (who was the AI advisor in the Biden administration). And they similarly were talking about these powerful AGI models that are on the way and what it means when they get here. I think Ezra Klein's position was that we're just not at all prepared or really having sufficiently sophisticated and creative conversations beyond “there will be job loss and we'll have universal basic income” or “jobs will be lost and new ones will be created—that's what always happens.”
But really, like, what does the future look like? And it tracked a conversation I had with my friend Trish, whom I am constantly talking to about AI and who constantly does not want to hear about it. Finally, she said to me, “OK, you keep telling me that these powerful systems are already here, that they are going to impact learning and work. But what does that actually mean for me as a citizen of Earth? How should I actually prepare for what's coming?” It was such a great question, because as much as I think about it, I don't think I've put enough thought into what it really means on a personal level. What does it actually mean?
So of course I did what I always do: I went to the models and I asked them. I asked them, sort of as a citizen of Earth, “How should I be preparing for the arrival of AGI?” And I thought it was interesting because I asked Claude and ChatGPT (and we’ve talked about personality differences), and I feel like the tone of the outputs was different.
So Claude said, “Preparing for the potential emergence of AGI in the next couple of years is a very thoughtful consideration. Here are some practical approaches: Strengthen creative thinking, ethical reasoning, interpersonal skills, and emotional intelligence. Build general resilience for technological change and uncertainty. Maintain a diverse skill set that can adapt to shifting technological landscapes.”
And then it ended with a sort of topical box called “Perspective,” and it said, “Remember that AGI development will likely be gradual rather than sudden. So focus on how you might adapt to and benefit from advanced AI rather than only preparing for worst-case scenarios. Consider how you might help shape AI's positive development.” So that was Claude's perspective.
I then asked ChatGPT, which had some of the same recommendations but also some different things. It talked about experimenting with current AI tools to understand their strengths and weaknesses.
But then the tone really shifted: AGI could disrupt industries, jobs, and societal structures rapidly – focus on developing cognitive flexibility. The economy could shift dramatically – consider diversifying investments into AI-related industries. Have financial buffers in place in case of market turbulence.
And then it ended by saying, “If AGI arrives in the next two years, society will be in for the wildest ride in history, but preparation, adaptability, and a forward-thinking mindset will give you an edge.” So I don't know that I have any really concrete ideas—other than investing in AI-related industries—that come out of those things. But I thought the tone difference between the two was really interesting.
Bridget McCormack: Yeah, and it sort of reflects, I think, the tone difference between the founders of each company. I don't know—I’ve listened to a lot of Sam Altman and I've listened to a lot of Dario Amodei. Even though I think the Anthropic team has been focused on safety (that's like one of their calling cards), Amodei does a pretty good job at identifying the positive future that at least we'll see, in terms of scientific discovery and medicine and some of the things that, you know, can't come fast enough in a way. And while focusing on safety, I think he does a nice job framing it.
I feel like sometimes Sam Altman is a little bit more, well, “I guess we'll figure it out” (you know, government will eventually figure it out). You know, it's all happening, like it or not. And I'm always like, remind us why there's some good here. And, you know, you and I talk a lot about the potential good, at least in the spaces we occupy. But I don't know, he feels a little bit like, “I think you guys will figure it out”—a little more nonchalant—
Jen Leonard: I totally agree; it feels a little pass-the-buck from Sam Altman. And I think Dario Amodei—almost, to the extent that I can understand his feelings through his interviews—seems to agonize a little bit over how to maximize the good here while minimizing the harms. And I had the exact same reaction as you did. It felt like Dario Amodei and Sam Altman themselves had answered these questions: “I don't know what's going to happen, but it's going to be wild.”
Bridget McCormack: Just buckle up, you know?
Main Topic: AI & Law Schools—Are We Teaching the Right Skills?
Jen Leonard: So we will move into our main topic today and share a little bit about a great conversation I think we got to have this morning with some other forward-thinking leaders: Andy Perlman, the Dean of Suffolk Law School (who also has a new role leading innovation at Suffolk generally, across the university), and Nancy Green, Chairman, President and CEO of Miles & Stockbridge.
And the conversation really was about what happens to the legal profession in this AI/AGI/ASI future: the skills lawyers will need and the way we teach law students. Our panel followed fantastic remarks from one of our favorite thought leaders, Jordan Furlong (and I'm looking at my phone here because I took a picture of some of the skills Jordan talked about). He had a slide listing the skills lawyers have historically been valued for (like drafting summaries of documents, analyzing case law, writing memos and briefs), and he just sort of had a big X across all of those. And you could see, like, if you're using these systems every single day… I think I said this to you last night: I can't believe that I existed in a world where document summarization was actually something that we did. That seems bananas to me now.
Jordan talked about, well, if those things are going away as the skills that we need, what do we need for the future? And I won't read through the whole list because there are many of them, but he breaks it down into the knowledge that we need (the what), the skills that we need (the how), and the ethics that we need to focus on (the why).
And he talks about knowing legal processes and sources of law, understanding legal reasoning, and understanding threshold concepts in numerous legal areas versus a doctrinal deep dive into the topics we study as first years. In the “how” bucket, he always separates hard skills and soft skills as technical skills and human skills, and really thinks the future is about developing human skills that include advocating and negotiating, building relationships, displaying empathy, facilitating solutions, and resolving conflicts.
And then on the ethics front—I know Jordan has been writing very powerfully of late about advancing and defending the rule of law, and something that I know you and I focus on (and you have done amazing work around) is fixing the civil justice system so that we can fortify the rule of law. Jordan said explicitly that we've really dropped the ball as a profession there, which has led to a lot of the undermining of the rule of law. So it was a great kickoff from Jordan, and then a great discussion among our panel.
Bridget McCormack: Stepping back for a minute, I found the conversation so interesting because a year ago it wouldn't have been as advanced as it is now. I feel like we all jumped right into talking about where we're headed, what it means, and how to get from here to there, instead of the conversation you and I were having a year ago with most legal audiences, which was basically, “No, seriously, you guys… really.” We did that for a year. And now we're past that. It was a packed room with tons of nodding heads; during Jordan's presentation I was looking around and everybody was like, “Yep, yep, we're not summarizing documents anymore. So what are we doing?” And that's refreshing.
It feels refreshing to me that you have big packed rooms where everybody's actually just sleeves rolled up: What does this look like? Where is our value add if we're a legal business? Where is our value add if we're a law school? (I'm not saying there were a lot of schools in the room—there were not.)
But Andy is a pretty good representation of somebody thinking about what the future of legal education might look like in a world where lawyers need the skill set that Jordan described, instead of the skill set that, you know, law schools have been teaching for a long time. So that's sort of stepping back, big picture. I'm just happy that that's where the conversation is now. It's like, how do we get from here to there? Because there's so many smart, creative people in those rooms. I'm just glad to have them along for the ride now. Like, yeah, let's figure this out, right?
Jen Leonard: And can I just say that that tracks my experience talking with law firm audiences—even from the fall. I feel like maybe a lot of lawyers spent winter break playing around with ChatGPT or something, because starting this spring, the Q&A segments of those presentations have gone from, “Well, this hallucinates, this is not accurate, we need to be careful about ethics…” And it's not that they're not thinking about that anymore, but that was almost like the denial stage.
And this spring, I have just noticed a huge jump in the sophistication of the questions—in the sort of solution-oriented nature of the questions. Like, “OK, we get this… what do we do now? How do we test things? How do we experiment without compromising data?” So all of this tracks. We run surveys with clients that also track this in attitudes and awareness—that people are much more aware and using this technology much more, which is pretty stunning in just a year's time (and even, I would say, probably six months' time).
Bridget McCormack: Yeah, I was presenting last week in Riyadh at a panel about arbitration, and the panel was mostly about some of the traditional advantages of arbitration over… we were sort of comparing it to international commercial courts, which have emerged as another way for cross-border dispute resolution that might make sense for certain disputes.
But there was one question where I talked about the advantages that ADR providers might have if they can adapt AI to bring point solutions or even broader solutions to cross-border dispute resolution. But one of the questions after the panel was like, “Shouldn't you guys basically be talking about AI more? Like, isn't AI really about to disrupt just about everything?” And I was like, “Well, yes.” But I was trying to just, you know, bring this along.
Then somebody else pulled me aside after the panel and he was like, “You know, I really think it's already there and we're moving a lot faster.” I was like, “Dude, I know. I'm just trying to bring everybody along. I want us all in the conversation—I didn't know where everybody was in the room.” So I think that's actually encouraging.
OK, so back to this morning. I thought that it was great to have Andy on the panel. I mean, probably anyone who listens to us regularly knows that we think of Andy Perlman and the work he's doing at Suffolk Law School as a great example of how a law school can be thinking about teaching the new competencies or skills (or however we want to think about them) that law graduates might need to be competitive in a changing profession.
In a way, Andy's been at this for a long time—long before AI was having a tremendous impact across our profession. He was thinking about how to train law students to just be more aware of legal operations and efficient processes and where technology fits into all of that, and doing that in lots of creative ways. But I thought his comments about the front end of the market and the back end of the market, and who has their hand on some levers to make a difference in what legal education looks like, were especially interesting.
So he said he's already seeing in some of his admitted-students events a more sophisticated buyer. Prospective students are showing up at these kinds of events asking questions about what kind of training they're going to get to prepare them for a totally changed profession. And he's seeing people show up at his events who might not have been there, you know, two years ago, four years ago, six years ago—because Suffolk Law School isn't in the T14. And, you know, it used to be that you thought about going to the law school that was ranked the highest in this very weird and anachronistic ranking system. But he's seeing some of that changing, and that's interesting. And I think that is one place where prospective students have some power and some leverage.
And then at the back end, it was more of a call to the lawyers in the room to pull the levers they have as purchasers of talent. If they started really looking for students who are trained in some of these skills and competencies that are going to make a difference in the future of legal businesses, they could change what law schools do tomorrow. If law firms hired for those competencies instead of relying on a law school's ranking, Law Review, and some of these other traditional markers of quality in the hiring market, they could make a difference right away in how law schools think about what they teach and how they train.
So I thought that was pretty interesting. It's like he can do what he wants in the middle, but it's really kind of these outsiders at the front end and the back end of his business that have a lot of ability to make change. I don't know—what were some of your reactions?
Jen Leonard: Same here. It feels like what I was hearing from Andy is that for a while we've all paid lip service to things like, “We want students who are prepared to practice on day one,” and “We want students with interpersonal skills and judgment and who know how to leverage technology.” But I think his point is the market is not rewarding schools that are actually being innovative in that way and focusing on those core skills. They are relying on rankings and brands even as they express frustration that they're not getting graduates with the skills that they're looking for.
And so I think it was just a plea to employers to put their money where their mouths are. And I think you're right that not only would it reward Andy's efforts at leading innovative approaches to legal education, it would really amplify and scale what he's doing at Suffolk across legal education. Because if that starts to be the model, then schools really have to establish the ROI that they're providing to their students in that form, versus just a brand that is sort of attached to their credentials.
And I loved the exchange between Andy and Nancy—who is an employer, of course, of graduates of law schools. I think Nancy has a very thoughtful approach to two things that stood out to me. One is her strategy at her firm for generative AI deployment, and sort of shifting the mindset across roles—from everyone, from senior partners to junior associates and other timekeepers in business departments—to try using AI first (this AI-first mindset) and see what you can do with it.
And she is also providing, I think, some cover for junior associates to really think about and learn the technology, whereas I think a lot of firms are sort of hiding the ball from junior associates and others who are not partners, which only elevates anxiety and makes people fearful of the technology. Like, “Are they not telling me because they're going to get rid of me?” And the answer to that might be yes, if that's what your firm's approach is—or it might be that they're just not being thoughtful enough about it.
The other thing Nancy talked about that I found really interesting—because I think the biggest asset that mid-sized firms like her firm and many others have compared to the global brands is they are able to develop real culture. And at a certain size, I think they're still small enough to be nimble and change and adapt and have, you know, in the right firm with the right leadership, a real (for lack of a better word) affection for the people that you work with and a desire to work as a team to move forward. Some of the big global law firms have the resources to test every tool imaginable, but across thousands of people who are moving all over the place, it's really hard to sustain culture or make your firm distinctive in any given way.
And she talked about viewing on-campus interviewing differently—that it's happening earlier and earlier in law school. And so their response has been to not even view it as a place to hire law students (which I found really interesting), but a place to plant a seed that her firm is the place that maybe they want to come to as mid-level associates because they care about culture. And they're not gonna pressure students at this point to sort of commit to something they don't understand. And I just thought both of those things spoke to a long-term leadership strategy of keeping your firm intact and the culture that you prize while also being nimble and innovative. I just thought both of those things were really interesting.
Bridget McCormack: She said one other thing that really stuck with me, because it resonates with things I hear from my own 20-something kids, which was that she herself is going to be stepping away as the chair of her firm—I think in two years (or maybe one year, I don't remember)—but explicitly because… I mean, obviously she's doing a great job; I'm sure she could keep doing a great job for another five years, probably another ten years. But her view was like, it's time to make room for new—I think she might've said “new generations,” but more generally just new leaders. There is a benefit, in the world we're living in where this technology is changing everything so quickly, to making room for new leaders.
And that is something, like, across our profession (and across government too). My twenty-somethings are always like, “Any day now you all could let us have a shot. Like, you haven't nailed it… but you could move over and let us have a shot, see what we can do.” I think in legal institutions that can be true as well, right? It can be easy to kind of hold on to leadership positions within legal organizations.
And there's obviously tons of value in experience—experience is something that you can't inject into someone or even have ChatGPT replace. I really think that's valuable and something that will remain valuable for humans. But there's something about letting new voices into the leadership conversations that makes a big difference. And she's a stunning example of that.
Jen Leonard: And I think both of them talked about—and in their day-to-day leadership embody—transparency. And I think it says a lot that she was in a public place talking about why she's stepping back, when she's stepping back, and what she hopes for the future. I just feel like there's an element of trust in a leader, in a world where things are so volatile, when you have even an inkling of what they're thinking and you know there's a really thoughtful strategy for the future. And you talked about transparency in a lot of different ways.
And another thing came up in hearing your story at the AAA (and we can tease that we'll be at LegalWeek together talking about an online educational offering we've had the chance to develop with your team): in the course of interviewing your team, it became clear how competitive people got. They moved from this place of skepticism and fear and uncertainty around using AI to competing to find the best AI applications, tools, and resources.
And I think all three of you on the panel today are really transparent, really thoughtful, and thinking beyond—especially maybe in Nancy's case as a partnership—the end of the year and the profits per equity partner that get distributed. But really, you know, you're bringing everyone along at the same time, as you said. And maybe, as in the conversations you had in Riyadh, people are ready to be brought along at an even faster pace than the one we've been guiding them at.
Bridget McCormack: Yeah, I think people are still a little bit all over the map, but we’re definitely seeing more folks who are ready than even in the fall. I completely agree with you—it feels like we’re in the next phase. And that’s exciting.
I still believe transparency about what we're doing—what we're all doing—is as important to building trust and confidence as anything else. I’ve said in many conversations that the technology is kind of here. It can do just about everything you want it to do, or at least an awful lot of what you need it to do. But the people still need to be brought along. And I think that’s happening in all kinds of ways.
I was just thinking about how it’s so much easier, as a user of medical services, to see the upside of this technology. Like, I really want my radiologist to have AI read my scans—and then I want to talk to her about what they show. Right? Maybe eventually I’ll talk to the AI, but right now, I want the human who knows me, knows my family, knows my priorities and what I care about—to talk with me about whether we’re just waiting three months for another scan or doing something more aggressive.
(And I don’t mean to sound like I have some medical problem—I'm fine! I just mean that the analogy works for me in medicine. Seeing the benefit of human plus technology feels great. It feels like huge upside potential.)
And there’s no reason why that shouldn’t be true in law too. We can be better with this technology.
Jen Leonard: I mean, we've had conversations comparing medicine and law. I imagine—knowing nothing about being a medical professional—that they're having a similar experience, in the sense of, I can't believe that I spent time in my life as a lawyer summarizing documents. I spent a lot of time—like Andy gave the example today of doing a 50-state survey as an associate. I remember doing 50-state surveys as a summer associate, and it took weeks, an unbelievable amount of time, to review the law of 50 states. Deep Research can do that in under two minutes now, I'm sure. And I'm sure doctors and medical professionals will have a moment where they're like, “I can't believe we used to read and diagnose based on scans or X-rays. Now we focus on the empathetic piece of it—the part about helping people make better decisions.” And so maybe professions like law and medicine are moving in similar directions and can learn from one another how to teach better.
This was a really fun conversation. There's so much ahead in the landscape of legal education. And we were talking before we jumped on the podcast about how much opportunity I certainly think there is for forward-thinking law schools and professional development leaders to really make their mark by taking advantage of this moment and helping people along in new ways. And I think Jordan and Andy and Nancy—and you—gave us a great sense of what that might look like.
Bridget McCormack: And you organized the discussion beautifully, like you always do.
Jen Leonard: Thank you very much. It was a treat and a delight. I always learn from all of you, and talking it through helps me learn even more. So thank you for letting me sit in your offices today. It's been so great to podcast together in separate rooms (as we always do). And thank you to everybody out there listening. We look forward to seeing you next time on 2030 Vision: AI and the Future of Law. Take care.