Introduction to Generative AI & Why Lawyers Should Care

Summary

In this inaugural episode, co-hosts Jen Leonard and Bridget McCormack introduce themselves and lay the foundation for their new podcast exploring how generative AI is transforming the legal profession. They reflect on their own journeys—from legal practice and the judiciary to innovation and education—and explain why they’re passionate about creating a space for real-time conversations around AI's impact.

They emphasize that lawyers don’t need to be tech experts to engage with AI—and that doing so is no longer optional. The legal profession has a window of opportunity to adapt, learn, and lead in this era of transformation.

Key Takeaways:

  • GenAI is a game-changer for law: It creates new content using language models, making it highly relevant for the text-heavy legal field.
  • Lawyers aren’t behind—yet: The tech is still new, and there’s time to catch up, but active engagement is critical.
  • High impact across legal roles: Judges, law firms, in-house counsel, and law schools all face transformative opportunities and challenges.
  • Access to justice boost: Courts can use AI to better serve self-represented litigants and improve public trust.
  • Legal education must evolve: Schools that adapt with tech-forward curricula and personalized learning tools will lead the future.

Transcript

Jen Leonard: Welcome, everybody, to our new podcast, 2030 Vision: AI and the Future of Law. We’re really excited to convene a conversation between the two of us on a regular basis to talk all about how we perceive the future of law to be impacted by AI as it unfolds in real time. I’m Jen Leonard, and I’m so thrilled to be joined by the Honorable Bridget McCormack. Maybe we could kick off by just sharing a little bit about our backgrounds and why we’re so interested in this topic. And Bridget, maybe I could toss it to you to kick us off.

Bridget McCormack: Absolutely, it’s great to be here. I’m really excited to be able to have yet another excuse to talk to you about my favorite topic with my favorite thought partner on this topic. There is so much going on in AI generally and in the legal profession in particular that tying it all together has been something I spend so much time thinking about, and you’ve been the person that I’ve relied on the most to help me think through some of it. So it’ll be fun to do that kind of out loud in front of people.

Bridget McCormack: I’m excited to be here. I’ve had a number of chapters in my career. Right now I’m the CEO and President at the American Arbitration Association, which is the largest provider of ADR services in the world—we administered over half a million cases last year outside of courts. Before I started that job, only about 15 months ago, I was the Chief Justice of the Michigan Supreme Court. I served on the court for 10 years and was the Chief throughout COVID, which was interesting. It gave me an opportunity to really innovate and figure out new ways to provide dispute resolution services in a public dispute resolution system.

Before that, I spent a decade and a half teaching at the University of Michigan Law School—also a fun chapter in my career. And before that, I taught for two years at Yale Law School. My very first job was as a public defender in New York City, so I was a real lawyer doing real things for five years. It feels like I’ve had my hand in lots of different parts of the legal profession. And I know you have as well. Tell folks about your background.

Jen Leonard: Yeah, sure. I’m really excited to talk to you too, mainly because I am driving my neighbors batty continuing to talk about generative AI and they really want me to talk to somebody else about it. And I love talking to you about it because I just find it to be the most fascinating topic that I’ve ever encountered in my life. Just to give you a little bit of my background, you and I met working together and teaching together at Penn, focusing on innovation and the future of the legal profession. I recently left to launch my own company focused on teaching creativity and innovation across the profession so that we can really harness these emerging technologies and other dynamics to do things differently in law. I spent the 10 years before that working in legal education, first in law student professional development—trying to prepare new lawyers for the practice of law and teaching them all the things that I never really learned formally: how to build relationships, how to grow teams, how to report up to other people, how to lead, and how to infuse technology in everything that they do.

Before that, I practiced for 10 years, first in a law firm as a litigation associate, and then as Chief of Staff to the City of Philadelphia Law Department, which is an incredible place to work. My husband, whom I met there, refers to it as being an “emergency room lawyer” — you never knew what was coming through the door. You built skills incredibly quickly, and it was a ton of fun, and I really enjoyed that job. And my first job out of law school was clerking for our state Supreme Court here in Pennsylvania. I think we’ve had some overlap in some of our experiences and then different things that we’ve done, but both of us share a passion for doing things better in the future. And so why are you excited, Bridget, to have this conversation—to add yet another thing to your crowded plate of obligations and activities—to talk about generative AI?

Bridget McCormack: Yeah, first of all, I apologize for leaving out my Penn chapter. I didn’t mean to do that, because in some ways that was my most important chapter, even though it was a very part-time part of my employment. It’s where I met you, or where I got to work with you first, and so that was great. The Future of the Profession Initiative was a pretty awesome collaboration, I think, with some big goals about some of the things I am most excited about regarding this new technology and connecting it to the business of law, the practice of law, the formation of lawyers, and access to justice.

Some of what I have viewed as the biggest problems in the legal profession and the way it serves our neighbors—our communities—feel like they could be really impacted positively by this new technology. I also think that lawyers are risk-averse by training, and that’s a good thing for lots of reasons. So they have been slow to adopt the technology. In most legal rooms that I’m in, you hear a lot about the risks and the concerns about the technology. And of course, it’s important to be aware of those and be thinking about those. But I think it takes lawyers a little longer to jump in and see what all of the benefits are. And so I’m excited to have this conversation, to ignite more conversations like it and hopefully bring about lots of important change in the legal profession—a profession that I’ve spent my whole career focused on. And you and I have been doing this in class; we can scale that, and I’ll let you talk a little bit about that. But why are you excited about this new venture we’re embarking on?

Jen Leonard: Yeah, well, as you mentioned, we had the chance to teach a class on generative technology, really the first year that generative technology emerged. It was the most fun class I’ve ever taught because it was a completely blank slate and we had no appellate case law, no law review articles to read. So we were listening to podcasts and having online conversations with our students in real time, who were really, really curious and creative and also critical, and helping us think through the risks and the opportunities.

And it was such a great experience. We were only able to talk to 20 people at a time, once a week. And when I talk with lawyers and legal professionals generally, one of the things I think they’re struggling with is that they keep saying, “I feel so far behind. I feel like I can’t catch up or keep pace with what’s happening.” And I always remind people that this current version of the technology is really only a little over a year old. So while it’s understandable—there’s so much conversation and you feel left behind—nobody’s really behind unless they’re refusing to participate at all. But lawyers are also super busy, as is everybody. And so I think it’s hard for them to figure out how to fit in some ongoing education and awareness of the changes. And I thought it would be really helpful for others to have access to an ongoing conversation in the legal domain about what the implications are. You and I follow several thought leaders in the broader tech landscape, but I don’t think those conversations (which I find enormously helpful in staying current) always reach the legal profession. And so the combination of our risk aversion and then not really being privy to the conversations about what’s changing creates an opportunity, I think, to advance that conversation and make sure we are staying current. And also, I just selfishly love to process this stuff with you—and we do that frequently. So we figured, why not press record, and hopefully this could be useful to other people as we go in the profession.

Bridget McCormack: I had that same experience with people saying, “Oh gosh, I feel like I missed the boat or I’m behind and I don’t even know where to start or where to catch up.” And I get it. I mean, I will tell you, just in preparing for our class—or maybe even before that, when we first started talking about the technology, when I first started using it when it was released and kind of learned about the decades of research that built to that moment—as far as I was concerned, AI just kind of appeared in November of 2022. Turns out that’s not true; it was around for a long time. But in a way, you can get caught up pretty quickly given that we’re not that far into at least general use of this new phase of AI, generative AI. And so I hope that’s one of the things people take away: I think anybody can get caught up and roll up their sleeves and get involved. And we’ve had great responses when we’ve done presentations to legal audiences, really diverse legal audiences. People have pulled us aside and said, “Can you come talk to my law department? Can you help us think about how to approach it?” And so it’ll be fun to hopefully give a lot more people tools to do that as we go.

Jen Leonard: Yeah, absolutely. And I was brand new to the background of AI as well. I did not realize the 70-year history that led to the emergence of generative technology. I think it’s also helpful to understand that background for a couple reasons. One, to understand—especially in the face of the skepticism and some of the criticism of thinking about generative AI in law practice—that we are very late to the conversation in terms of the entire history of AI. So it feels as though it is this sort of fad or hyped technology that just emerged and will go away very quickly once it’s deemed not to be as capable as we hope it might be. But there are technologists whose entire careers are focused on advancing these technologies, and the breakthroughs that have happened in the last few years have only acted as an accelerant, adding enthusiasm, investment, and hunger to achieve more. And we could talk in future episodes about the reasons why that necessitates some guardrails and regulation and careful thought, but looking at the landscape more broadly, I don’t think this is going anywhere. I think it will only improve. The investments will continue to grow. And I’m really excited about that.

For the reasons that you and I have spent our careers focused on the legal profession— because we’ve really been trying to tinker around the edges with not-so-sophisticated technology to try to solve some of the intractable problems—this feels like this enormous opportunity to leverage something more powerful than we’ve ever seen for good. And I would say that’s our shared worldview, recognizing that there needs to be critical thought applied and safe deployment.

Bridget McCormack: Yeah, it probably makes sense for us, just in this first episode, to level set on some of the fundamentals in the technology. I just never want to assume that people understand it. I don’t know why they would, especially busy lawyers who always have emergencies to handle. You know, I’ve been trying to write this lawyer article, as you know, for 50 years that has a great title: “Let’s do emergencies last.” And I can’t get rid of the emergencies because we always have something else to do. Would you just take a minute and kind of explain what’s new about generative AI? Why is it called generative AI? Why is it different from the AI that we’re familiar with in our Netflix feed?

Main Topic: What is Generative AI?

Jen Leonard: Yeah, absolutely. And I will also say I am not a technologist. Everything that I’m learning, I’m learning over the last few years as well. The other point I meant to make is: if you like learning, this is the moment to live in, because I feel like I’m learning every single day, all day long, as the technology unfolds. But what I’ve learned over the last few years reading about the background is that artificial intelligence as a body of study has been around for a very long time, and it’s been infused in everything that we experience day to day for about a decade now. 

If we use Netflix, if we use our GPS, if we shop on Amazon, if we do online banking, or we purchase insurance, we’re using AI-infused products. So they are already here. The difference is that in 2017, a team at Google achieved a breakthrough in artificial intelligence when they introduced what’s called the transformer architecture, which, as I understand it, essentially predicts the next most likely word—or in AI parlance, token—in a series of words or tokens to produce new content in response to prompts from human beings. And this was a different way of using artificial intelligence. It’s not simply recognizing patterns and surfacing things we might want to buy, things we might want to watch, ways to get to different places, but actually producing new content based on its analysis and the proximity of words to one another in its training data. And so that creation of new content—and in the years since that 2017 breakthrough, its increasing ability to take data on which it’s trained and analyze it, summarize it, synthesize it, and make new combinations that produce new content that makes sense of that data—has been the breakthrough.

The first couple of years with the emergence of ChatGPT were really focused on textual generation, so the production of essays and things in the style of Shakespeare that we’ve seen, that kind of thing. And now we’re moving into a world where increasingly the generation will be video and audio outputs. Really anything that humans produce, the technology is increasingly producing. So that is my understanding of why the breakthrough is what it is.
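Jen’s description of next-token prediction can be made concrete with a toy sketch. This is pure Python, nothing like a real transformer (the corpus and helper names are illustrative only), but it shows the core idea of predicting the most likely next word from observed word pairs:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each possible next word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the continuation seen most often in training, or None."""
    candidates = model[word.lower()]
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A tiny, made-up "legal" corpus for illustration.
corpus = (
    "the court denied the motion . "
    "the court granted the motion to dismiss . "
    "the court denied the appeal ."
)
model = train_bigram_model(corpus)
print(predict_next(model, "court"))  # → "denied": the most frequent follower of "court" here
```

A real large language model replaces these raw pair counts with probabilities computed over long contexts by a neural network, but the "predict the next token, then repeat" loop is the same.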

Then, Bridget, you know, if you want to talk about why the ChatGPT moment—which happened, what, five years after the transformer architecture—led to the urgent conversations we’re having now… 

Bridget McCormack: Please jump in and correct me whenever I say things that don’t sound right to you, because I am very much not a technologist. But like you, that’s part of what’s been so fun about this: the ability to learn along with everyone else, because it’s new for all of us. Even technologists don’t perfectly understand everything these models do, and they can’t always predict it (we can get to this later). When GPT-4, which is the model of GPT that’s out now, started coding in April of 2023, even Sam Altman, OpenAI’s co-founder and CEO, was surprised by that. So it does make you feel like you can learn as everybody goes.

But I think what happened when ChatGPT was released in November of 2022 was all of a sudden the public had access to it. And so all of us who weren’t working in Google’s labs or Microsoft’s labs or OpenAI’s labs…I wasn’t following what was happening, and there were earlier versions of GPT that had been released and had been performing some of what you just described, but not at the level that ChatGPT could. And as far as I understand it, when the OpenAI team decided to release ChatGPT, they didn’t expect that it would make that much of a splash. They thought there would be—I don’t know—some number of downloads and maybe some people would start experimenting with it, but they were surprised by how quickly the public started using it. And it was five days to a million users? I mean, that’s unbelievable. There’s never been that kind of uptake for any kind of technology. And I think it’s so user-friendly that everybody… you know, anybody could say, “You know, write a sonnet in Shakespeare’s voice about the rule against perpetuities.” Okay, that’s a really geeky example—you know what I mean? I mean, one of my first use cases was when I was going on a trip to northern Michigan and I wanted it to map out for me all of the hard cider places within 20 miles, and the things that it could do were so much fun that it had such quick uptake and lots of use, which allowed the model to get better and stronger and produce better and better answers.

I think for lawyers in particular, this kind of technology was immediately impactful for those of us who were willing to dig in and figure it out, because words are, of course, currency for lawyers, right? That’s what we work in. We work in words. And when all of a sudden a data set is a group of words, well, now a technology that works on that data set is pretty relevant to what we do. And, you know, people don’t think of legal work as training data for technology. But a large language model is a technology built on words, and when your work product is a group of words, it’s exactly the kind of data these models can learn from. Our data is actually pretty structured, right? Legal opinions have a formula. We write in similar styles; regulations and statutes have a formula and they’re fairly structured. And so, in a lot of ways, I think this technology became immediately relevant to lawyers and legal professionals in a way maybe others haven’t, or have taken longer to be. Does that sound right to you?

Jen Leonard: Absolutely—the sort of templatized, uniform style of writing, the way that we’re all taught to write in similar ways. We each have our own style, but we learn in law school how to write like lawyers, and that’s reinforced over time. And then also the underlying data on which we would be training models in legal—as compared with the general models, which are training on everything (the good, the bad, and the ugly on the internet)—we’re training on some of the world’s most trustworthy sources: West publications, federal reporters, restatements. And then in our organizations, if you’re thinking about a law firm, your internal document management system; if you’re in a court, the opinions of the judges that sit on your court. And that’s what struck me at the outset: the potential power in our industry.

Those combinations of it being a language-based industry with a language-based machine, our templatized writing, and our trustworthy underlying sources—it’s sort of a recipe for impact on what we do.

Bridget McCormack: Yeah, now you just made an important point that I do think is part of the foundational knowledge base that will make sense for some of the future conversations we’ll have, which is: there are these models that are trained on all of the internet and, you know, there’s a lot of mess on the internet, so mess might come out of them when you’re using them. But there are other models that are trained on the legal data specifically. And some of these are commercial products that law firms are using now. But is it the case that even a large language model that’s trained on legal data—opinions, statutes, all of the above—is also trained on the rest of the internet? What are the ways in which lawyers should think about the different versions of generative AI out there that they might want to start learning about or even experimenting with?

Jen Leonard: Yeah, so my understanding of the technology is if you hear people talking about, you know, a certain law firm or organization building a model in-house, what that really means is they are building on top of one of these foundational models from one of the major tech companies. It’s most likely Microsoft in combination with OpenAI, and maybe we should even define who some of these players are for those who haven’t followed. OpenAI was founded as a startup in 2015, with Elon Musk among its co-founders. He later left the organization for reasons we don’t need to explore, but he co-founded it with Sam Altman, who is now the CEO of OpenAI. OpenAI is closely related to Microsoft in a complicated structure, and Microsoft uses OpenAI’s model, GPT-4—the most current model in the GPT family and the one most people hear about through ChatGPT. And so when you’re building something in-house, you are generally contracting with a company like Microsoft to use that foundational technology (which has been trained on the internet and other sources for its capabilities) and then working with that organization to create safety and structure around your own organization.

Obviously, ethics and confidentiality and security are critical for lawyers, so you’re fencing off the data that you’re training with so it’s not going back to that company or being misused in any way. But my understanding is that you can use the capabilities of those foundational models and then train them only on the data sets that are internal to your organization. You’re using the broad corpus of legal information that is the foundation for your research and work, and then your internal work product, and then fine-tuning it by testing it internally, looking at the outputs, assessing the quality, and benchmarking it over time. That is my understanding of how in-house training happens. And I know you’ve done a lot of this with your teams at the AAA, so you probably have much more knowledge than I do.
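The in-house workflow Jen sketches—adapting a foundation model with an organization’s own documents—typically begins with assembling supervised training examples. As a minimal sketch, assuming an OpenAI-style fine-tuning format (one JSON object of chat messages per line; the file name and example text here are hypothetical):

```python
import json

# Hypothetical internal work product turned into training examples.
# Each JSONL line is one example: a short chat where the assistant's
# answer is the firm's own approved language.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You draft in our firm's house style."},
            {"role": "user", "content": "Summarize the indemnification clause."},
            {"role": "assistant", "content": "The clause requires the vendor to..."},
        ]
    },
]

with open("finetune_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Sanity-check: every line should round-trip as valid JSON with a messages list.
with open("finetune_examples.jsonl") as f:
    for line in f:
        assert "messages" in json.loads(line)
```

The fenced-off part Jen mentions—keeping confidential data out of the vendor’s general training pool—is a contractual and deployment question, not something visible in the data file itself; the benchmarking step she describes would then compare the tuned model’s outputs against held-out internal examples.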

Bridget McCormack: Yeah, no, that’s exactly right. We are a Microsoft shop—using the Microsoft products has been what makes sense for us. And we’ve just been training small tools to expedite our learning. And I know we’ll talk about that in a future episode, so I won’t spend a lot of time on that. But when you hear about a law firm training its own model on its own data, it’s using the capabilities of that frontier model (as they’re called)—either OpenAI’s or Google’s, or I guess you could use Meta’s, but I think everybody’s using Microsoft—and then basically confining it to their own environment and feeding their own data into it so they can learn whatever they need to learn about their internal operations questions. But there are a couple of large language models that are now being marketed to lawyers as general-purpose, off-the-shelf legal products, like the vLex model and the CoCounsel model. Is CoCounsel still a separate model at TR or is it now part of all of TR’s products? I’ve lost track.

Jen Leonard: As far as I know it’s still separate, but I could be wrong.

Bridget McCormack: No, I just know CoCounsel was the first—I think (again, correct me if I’m wrong)—I think it was the first large language model built basically on legal texts for lawyers. And it was sold to Thomson Reuters last spring, and I think Thomson Reuters is at least talking about maybe integrating it with its other AI products but it’s still a separate product you can use. 

And those are models you can just get on the market if you’re a lawyer. If your law firm is a smaller law firm and you’re not in a position to train your own model, but you think using the technology could bring efficiencies to your practice or your clients start demanding that you use it, there are models out there on the market that you can purchase. Okay, that was too long of a detour.

Jen Leonard: No, not at all. Not at all. I’m going to make it even longer and just say, again, to the point of responding to sometimes the skepticism in legal about whether this is just a fad or trend. You mentioned Microsoft; Google is in the mix as well, Meta a little bit. But the race among Microsoft, Amazon and Google to really dominate this market is based on their pre-existing relationship with millions of customers around the world. And the legal profession is a large contingent of that, but it’s only one piece of it. 

And if you think about Microsoft, all of their Office products are in most enterprise organizations around the globe. And so their bet is they can infuse generative AI into everything that people are already using and claim a larger market share than Google, for instance. And Google’s betting that it can infuse its technology in all of the Google products that we use. The one thing that I’ve been really interested in following is what is happening with Apple, because they are being very stealthy and I think they’re about to make some big announcements—and everybody has their phone in their hands.

We look through this tiny lens at our research products in legal and they’re, you know, hallucinating at X rate and “this certainly is the end of it.” But this is a much, much bigger rollout and deployment across all organizations and individual consumers. I find that really interesting, the arms race among these different tech giants and the implications it has more broadly for work. But maybe we can zoom out, Bridget, and think about the stakeholders in the legal profession and how—putting aside all the technology (again, the idea that I’m so fascinated by this is really stunning because I’m not a tech head at all)—so putting aside the details about the models and the training and the companies and all of these different things, why more broadly should the different stakeholders in the legal profession care? What are the implications for their individual roles? And let’s start with judges. They have, you know, maybe the most important role in upholding the public’s trust and administering justice. So what are the implications of generative AI for judges?

Bridget McCormack: Yeah, this to me is one of the most exciting frontiers of this technology in the legal profession. And I think I’m going to address judges and the public they serve all in one answer, if it makes sense. The civil justice system in America is basically a massive market failure in that 92% of Americans can’t afford help to navigate their civil justice problems. And that means for the most part, they either give up and they just don’t try to navigate them, or they try and manage them without lawyers. In state courts, at least one party is unrepresented in roughly 75% of civil cases. And there are lots of state courts doing as much as they can with the limited funding and personnel they have to find creative ways to help people navigate problems.

And this technology is a game changer for courts who want to deliver better service to those people who have problems and they’re going to have to navigate them and they’re not going to be able to pay lawyers to help them, and to the public who needs that help. So I find this to be one of the biggest areas where I’m excited about what kinds of change it might bring to the legal profession. 

There’s actually at least one court—maybe there are more, but there’s one that I’m aware of in Maricopa County, Arizona, where the court’s creative coordinator has built his own custom GPTs (again, based on the technology that OpenAI has—the GPT framework) to produce chatbots to help self-represented parties with different legal problems. And those are actually not that hard to build. I mean, you can build them yourself and they can provide real help to people. So I think if it allows courts to provide much better service and information to the people they serve, and it allows the people they serve to get much better information, we might really grow public confidence in our public justice system. And to me, that’s the whole ballgame, because the rule of law is just a set of ideas. And if people don’t have confidence in it, it doesn’t hold. What about the business of law—sort of private practice? Law firm leaders: why do you think they should care?

Jen Leonard: I think law firm leaders should care deeply, mainly because of how their business is structured and the way they serve clients, the way they generate revenue, the way that they train and develop new lawyers. All of this will be impacted pretty profoundly, I think, by a technology that works as easily as it does (and will in the future) with language. If you think about the law firm business model, there are a couple of elements of it that conflict directly with what the technology is able to do.

The first, of course, is the billable hour. And we’ve been hearing for our entire careers about the demise of the billable hour, and everybody says you’d be a fool to bet against it. I’m not betting against it; I think it will always remain part of a pricing portfolio. But it is a model that is rooted in inefficiency. It doesn’t necessarily reward you for doing something very quickly. And actually, if you look at the trendline, there was a recent Thomson Reuters report—I think the Law Firm Financial Index report (and I could be wrong on the name)—but a TR report came out that showed that we’re actually at the bottom of a 20-year decline across all timekeepers in all Am Law 200 firms, in all practice areas, of the number of hours billed per timekeeper per year.

Those numbers have been on the decline. The thing that has offset that decline has been the rise in rates, so the revenue continues to grow over time, but those hours were already under pressure and they dipped after the Great Recession, and that dip never recovered. So when you’re basing your work on a billable hour, that creates vulnerability with a technology that does things very efficiently.

The other element of law firm business models is their leveraged labor. Law firm partners come together to practice and they hire associates of all levels to leverage what they’re doing by generating more revenue—by doing tasks that are more entry-level tasks—so that they can expand their ability to serve clients. But that leveraged labor model will continue to be under pressure as the technology becomes more capable of doing some of the tasks that junior associates currently bill for, which contribute in a massive way to the revenue that law firms generate.

Another reason is the clients. Unlike prior waves of technology where the firms were really the ones to make the investment (in a technology like e-discovery or some automated workflows), on both sides of the transaction now you will have lawyers using this technology. So a general counsel will be deeply interested in figuring out how to reshape their legal spend by learning how they can use the tools in-house. If you’re thinking about the two buckets that GCs send to law firms, there is the specialized bucket of really sophisticated legal questions (they don’t have the expertise in-house and need a sophisticated firm partner to help them), and then there’s the overflow bucket where they do have the expertise, they just don’t have the person-power to actually do that work. That second bucket could be absorbed by technology in-house, which would make the GC look really great to their C-suite if they’re able to spend their legal dollars on the more high-end, business strategy questions that the business wants to solve.

And the last reason is that we have an apprenticeship-based model of lawyer formation in the early years, where we spend a long time doing rote tasks in isolation from the broader representation, and those become the building blocks of judgment and strategy and client service. In a world where that time period collapses—where the tasks that make up that work go away or are changed—that has enormous implications for what your junior lawyers are doing, how your in-house talent teams are able to support them, and what the future leadership of your firm looks like. And we’re doing all of this post-COVID, where there was already a skills disruption in the early years of practice because we were doing everything virtually. So I think law firm leaders might be among the most impacted stakeholders in the legal ecosystem.

And I sort of touched on general counsel, but how do you think corporate legal departments are thinking about these issues?

Bridget McCormack: I think, for exactly the reasons you just articulated, in-house counsel are absolutely focused on the ways this technology is going to change their relationships with outside counsel and the expectations their clients will have for them. In many businesses, their clients are already using the technology in other parts of the business, or at least figuring out where and how they're going to use it. And so they have the same expectation for the counsel's office: why can't it use the technology to produce the same kinds of efficiencies? But I think it also gives them an opportunity to think about how they structure their work in-house in ways that allow them to better serve their clients.

If law firms get this right, they can rebuild the way they serve their clients and the kinds of value they provide. I think the same is true of in-house departments. They can rethink what their ops teams do and how they do it, and therefore put their dollars toward those trickier, harder, bigger questions where they might not have the expertise on their team because it's not something they need day to day. They'll now be able to go get that expertise on the market with the flexibility of the spend they've freed up on the other side. They're probably also going to be able to make much better predictions about where their legal spend is going in the future. The tools are absolutely going to give them planning capabilities that have been hard for lawyers and legal departments to come by without them. So it's going to give them a broader look forward that will, I think, allow them at the end of the day to just better serve their clients.

Jen Leonard: Yeah, one of the things I think is really interesting is how useful GPT-4 in particular can be. I have found it to be a very helpful thought partner for strategy development. Maybe most lawyers don't think of it that way, of it having that capacity, but I think you're right. We focus so narrowly on the research and the writing and the brief drafting and all of that. But there are much bigger ways to use the technology, right?

Bridget McCormack: Yeah, I mean, I know some general counsels' departments that are using the technology on their billing, and it's allowing them to learn things about their own workflows that were just harder to know before this kind of advancement. One of the places you and I have both spent a lot of time is in law schools, and legal education is an area where I think there are probably going to be some growing pains. I wonder what your thoughts are on how law schools will need to adapt, if they'll need to adapt, and what the technology will mean for legal education and law students more generally.

Jen Leonard: I want to build on your point about the opportunities this creates for different stakeholders. We've been in a very steady-state world in terms of which schools' graduates are hired by which employers and what the approaches to legal education are. It's very, very uniform. It's a hallmark of American legal education that you know what you're getting, essentially, in terms of what students have studied in law school, including their legal research and writing classes. I think we're in this interesting moment where employers, for the first time, will have to look under the hood of different schools' curricula to see which schools are actually adapting to respond to changing skills needs. And I think that creates real opportunities for schools that have spent years building the muscle of integrating technology, design thinking, systems redesign, professionalism, and client focus to emerge as real leaders in the employment market.

And I know you and I are fans of a few programs in particular, Suffolk Law School's program and Vanderbilt's program. I think it's an interesting time for those schools to rise to the fore and lead the development of future curricula. I'm also excited about something else. One of the things I personally struggled with as a law student who didn't know anything about the law, and that I still find disappointing in the legal education model, is the lack of formative feedback customized for new law students. These are really complicated topics you're learning in the first semester of school, in an environment that's not always comfortable. And you don't really know whether you understand anything until you take the final exam and get a grade a month later, a grade that determines a lot of your job prospects at the outset.

So I’m excited at the idea that we can put into the hands of students their own ability to create customized GPTs that are trained on their professors’ sample exams, outlines from upper-class students, hornbooks, those kinds of things, and test against that in real time to figure out if they’re actually learning what they need to be learning. I think there are well-being benefits, there are diversity and inclusion benefits, and there are just straight-up learning benefits that can come from that. 

So I’m really bullish on some of the opportunities. I think there will be a talent crisis in all organizations—and maybe law schools specifically—in finding people who are connected to the market, to the practice, to what people will be doing and have the ability to translate that into curricula and teach it with impact, in a world where those skills will also be very valuable to law firms. And I think we’ll see a lot of talent wars. We’re already seeing them unfold in the law firm space. But what do you think?

Bridget McCormack: Yeah, I agree with all of that. I do think there's going to be an opportunity for a reshuffling of the way law firms think about different law school programs. Programs that can really take advantage of this technology, both to provide a better education to their students and to teach their students about the technology itself, can probably have a moment here where they market their graduates to law firms that might not have recruited there before. So I think that's exciting. I like it when innovators in any space get rewarded, and I think innovators in legal education will be rewarded in the coming years. So I'm in favor of that.

I love the opportunity for individual feedback and tutoring, especially in a setting where some law students will have a very hard time saying it’s hard for them to understand what’s going on. I was one of them. I didn’t have any lawyers in my family when I went to law school. I went to NYU Law School, which was a great law school. But I sat in most of my classes and felt like they were speaking a language I didn’t understand. I did the best I could, but I was definitely not confident enough to tell anyone that I wasn’t sure how I was doing. So I was just going to find out after that exam at the end of the semester.

And I think we know from lots of other disciplines now that that's not the best way to learn. And so I do think the changes that might come for law students, who I have a lot of empathy for, having been one of them, are pretty exciting. Again, another area where I'm a lot more positive than I am negative about what could be.

Jen Leonard: Yeah. And I think one of the reasons we're positive, and I want to circle back on the judicial point, is that we're a self-regulated profession. A lot of the educational model is sort of designed by… as a student, I didn't feel like I was part of the design process or understood how I was being educated. And this technology, I think, will force some reimagination among the people who have the power to change court systems and legal education. I wonder if you have thoughts about regulation. Of course, the practice is regulated at the state Supreme Court level, particularly around the unauthorized practice of law. To me, it feels like we're about to enter an era of whack-a-mole for state Supreme Courts that may not be prepared for the implications of this. But the technology also puts in the hands of regular people tools that could be really helpful in expanding access to justice. So what do you think regulators should be thinking about?

Bridget McCormack: I think it's going to be a fascinating ride to watch how that all plays out. I saw just today, in either Law360 or ALM, that there's a new lawsuit against LegalZoom for the unauthorized practice of law in New Jersey. I thought, "Oh, how quaint. LegalZoom!" It's sort of like kicking it back to the 1980s; I thought we had all gotten comfortable with LegalZoom at this point. People should be able to get some legal information if they can't afford a lawyer, you know.

But apparently there's a new lawsuit accusing them of the unauthorized practice of law. That seems like it's going to be very hard to police in an era where you can build your own GPT grounded in accurate legal information, or just go to ChatGPT and ask and get a pretty good answer to a lot of legal questions. And that answer is only going to get better with each model. I don't know how the technology doesn't run over the regulators this time. So if I were in the position of trying to figure out how to regulate it, I would want to take a very open approach: let some flowers grow and see what might happen that could help the people who need help.

So I do think regulators who try to operate the way they've always operated are going to have a very hard time with this technology; it's going to call for a new approach. But that, too, is another reason why it's going to be fun to have these conversations. I don't have answers. My only answer is: what worked yesterday is not going to work tomorrow. So it's time for everybody to roll up their sleeves and figure out all the positive things we can build and fix together because of this technology, and regulators should be part of that conversation. It should be good news for everybody. There's a lot yet to learn.

Jen Leonard: I think that's a great note to end on for our first episode. Both of us have a positive view, recognizing all the bumps and challenges ahead, but we're going to be able to do things that should have been done for a really long time and just weren't possible to scale this way, and give people the power to make their own lives, their own education, and their own development better. I find that really exciting. In the private sector, I'm also excited about some of the well-being benefits for experienced lawyers and the abundance mindset I think law firm leaders could adopt. We are tethered to 24 hours in a day, and it seems like we end up working most of them because that's our revenue model in the private sector: you have to bill as many hours as you possibly can. So imagine a world where we are less tied to that model and our value is more aligned with our ability to serve clients really well and help them solve problems. I think that's really exciting.

We really just wanted to create this space to help other people stay up to speed and to help ourselves understand the unfolding landscape around us in real time. We don’t really have a roadmap for the show, but we are working to build resources together as well to educate the broader community. And so I’ll just leave it there for our first conversation. But any closing words that you have, Bridget?

Bridget McCormack: No, I think you said it perfectly. We don’t know exactly where it’s going to go, but that’s kind of the point. We’re excited to talk about what’s going on and what we’re learning and what we’re thinking about, and inspire others to do the same in the legal profession—just because we see so much to be excited about and hopeful about, to make a better profession and a happier profession. I’m also excited about the other things we’re going to build together. Like you said, we’re going to work on some other materials, courses, opportunities to help lawyers who are trying to think about the best way forward and through all of this. It’s going to be really fun. I’m looking forward to it.