Is It Unethical for Lawyers Not to Use AI?

Summary
In Episode 22 of 2030 Vision: AI and the Future of Law, co-hosts Jen Leonard and Bridget McCormack challenge conventional thinking around legal ethics and artificial intelligence. Historically, conversations around AI in law have focused on risks—bias, hallucinations, confidentiality. But Leonard and McCormack argue it's time to broaden the ethical lens: Could it soon be unethical for lawyers to avoid using AI altogether?
Drawing from real-world anecdotes, headline-making lawsuits, and the alarming predictions of the AI 2027 Report, they unpack how generative AI is already transforming legal workflows—and why responsible adoption may not just be smart, but ethically essential.
The Shift: From AI Risk to AI Obligation
When ChatGPT first emerged, legal headlines centered around ethical missteps—like fabricated case citations. That narrative, while still valid, is only one side of the coin. This episode explores a growing counter-narrative: **that competent legal representation may soon require the use of AI**—not just to cut costs, but to deliver higher-quality, more equitable legal services.
Real Cases, Real Impact
Leonard and McCormack dissect two Georgia-based medical malpractice lawsuits where generative AI played a quiet but crucial role. In one case, ChatGPT helped a lawyer craft a compelling closing argument. In another, it unearthed a contradiction in an expert witness’s prior statements—an insight that might have remained buried without AI’s deep search capability.
These were not science fiction scenarios. These were lawyers using AI as an intelligent assistant, doing better work for their clients. And that, the hosts argue, may increasingly define the ethical standard.
Key Takeaways
- The Ethics Paradigm is Shifting: Competence and diligence may now include knowing when and how to use AI tools—especially when they improve client outcomes.
- Real-World Use is Already Underway: From assisting in trial preparation to improving expert research, AI is quietly reshaping how law is practiced today.
- The AI 2027 Report Adds Urgency: With credible researchers predicting superintelligence within a few years, the legal field cannot afford to lag behind.
- Clients Expect Innovation: Informed consent is important, but most clients will welcome powerful, secure tools that enhance representation and reduce costs.
- AI Narrows Resource Gaps: Especially in solo or public interest practices, AI levels the playing field by offering access to insights once reserved for large-firm litigators.
Final Thoughts
The legal profession often moves slowly, cautiously—and for good reason. But the rise of generative AI represents a true inflection point. The technology is not just evolving rapidly; it’s proving itself useful in ways that can materially benefit clients.
The ethics conversation must evolve accordingly. Risk management remains critical, but so does the duty to deliver the most competent representation. As McCormack noted, lawyers who don’t explore what AI can offer may soon find themselves outperformed—ethically, competitively, and practically—by those who do.
The future of law is not about choosing between tradition and technology. It’s about integrating both with integrity, foresight and a client-centered mindset.
Transcript
Below is the full transcript of Episode 22 of The 2030 Vision Podcast.
Jen Leonard: Hi, everybody. Welcome back to the 2030 Vision: AI and the Future of Law podcast. I'm your co-host, Jen Leonard, founder of Creative Lawyers, here with Bridget McCormack, president and CEO of the American Arbitration Association. Hi Bridget, how are you?
Bridget McCormack: I am good, Jen. It's good to see you. I miss you.
Jen Leonard: I miss you too. Where are you today?
Bridget McCormack: I am out in West Michigan in my new home office—which used to be our kids' club—where I'm my own tech support. If folks listened to our last conversation, they know that story. And we'll see how the Wi-Fi works based on my setup.
Jen Leonard: I love it. Well, it's great to see you, and I'm happy that you're stationary for a little while—you're always traveling. And we are going to have an efficient episode today. We're excited to dive into the topic of whether we are about to enter an era of more nuanced conversations about ethics, lawyers, and generative AI. And we'll talk about two cases that made the headlines for lawyers using gen AI in their day-to-day legal work.
But of course, as we always do in each episode, we'll start with two other segments to get us started. The first are our AI Aha! moments—those ways in which we've been using generative AI in our own lives that we find particularly interesting. And then we'll give an update on What Just Happened in the broader world, something interesting that our legal audience might have missed in the midst of their busy lives. So maybe you can kick us off with your AI Aha!, Bridget.
Segment 1: AI Aha! Moment: Old Trucks, Porcupines and Websites, Oh My!
Bridget McCormack: I don't know if mine this week is that interesting or useful to lawyers specifically, but my husband and I were up north in Northern Michigan this weekend, walking around this property that we own up there. We have this big wooded lot with lots of hills, and every time we've walked it, we find new parts of it. And this time we found two old vehicles—one old truck and one old car—that both, like, obviously had been there for decades, just based on looking at them. And so of course, to figure out exactly what the make and model were, I used my ChatGPT app and took photos of different parts of each of them so that ChatGPT could identify them—we were really curious what make and model truck it was, and what make and model car.
It was a Studebaker, by the way—the truck—and the car was a Dodge. I forget the exact years, but it literally was able to identify, like, you know, between '47 and '49 basically for each of them, and all these other details about them.
So then of course, we decided we were going to have ChatGPT do our nature lessons throughout the rest of the woods. Like, we were trying to figure out what the difference was between the different evergreens or firs or whatever—what's the difference between this one and that one? And not only would it tell you about it—you just take a photo and upload it—but it would also tell you how to make sure: like, you have to roll the needles in your fingers. If they roll, it's this; if they don't roll, it's this other thing. It was like this interactive ChatGPT walking me through the plants in the woods and teaching us about them.
I was really hoping we had Trillium back there, which is a gorgeous flower that does grow in that kind of area. It's a little early for Trillium, but I saw these green things popping up, and I was trying to figure out if they were going to be Trillium. And ChatGPT is amazing. It's like, "Do you want me to show you a photo of what Trillium would look like at that phase?" And I'm like, "Yeah, show me the photo." And ChatGPT says, "I can produce a really realistic photo." And I was like, "Go for it, ChatGPT."
So anyway, we had a woods tutor. It was pretty fun.
Jen Leonard: That's so cool. Were the cars in decent shape or had they been there for a long time? Did you ask how long they'd been there?
Bridget McCormack: They've clearly been there for, I don't know, probably four or five, six decades. Like, a very long time. One of the other things I got an answer on—there was a big pile of clearly some kind of animal scat, but not scat that was recognizable to me. And so of course, I asked ChatGPT, "What kind of scat is this?" And it was porcupine scat, which apparently is knowable—I don't need to get into the details of the scat, but pretty amazing.
Then of course I got terrified because I was like, well, if there's a porcupine under this truck—we're in porcupine trouble. My husband was like, "What do you mean by that? Are you going to touch it?" I was like, "No, but it shoots you with its needles." He's like, "It does not shoot its needles. That's not what happens." Of course, I asked ChatGPT and he was right. They do not shoot their needles. I must've seen that in a cartoon at some point.
Jen Leonard: We're in Porcupine Trouble.
Bridget McCormack: Anyway, what about you? Do you have a good one this week?
Jen Leonard: It wasn't all that creative, but it was sort of mind-blowing. So we are rebuilding our team website, and this is the type of task that I just never seem to have time to do. We got the draft wireframe and sample page—like, we can go in and look at it—and we just had not had the time to sit down and make a list of changes. So I finally blocked off like two hours to go through it. And it's a small website, but even for a small website, there are like 25 subpages that you click into. And I started making a bullet-point list of all the changes. And then I thought, I wonder what this would be like if I tried to use ChatGPT voice mode to go through it.
So I opened voice mode and told it, "I'm looking at this website. I want you to act like an assistant, and I'm going to tell you as I go what I want changed." And it was like, "Great, I'm ready to hear your feedback." I said, "Okay, I’m on the homepage. I don’t like this font. I want it to look more modern. I don’t like this picture."
But also—you wouldn't be surprised to know—it went beyond dictation. I would say, "You know, I don't really like how this sentence is worded. I'd like it to sound a little bit cleaner and simpler." And then it would say something like, "Got it. You want it simpler. How about this?" And it was, like, perfect—that sounds amazing. The one thing I thought would be awkward—and the reason I was hesitant to use it—is that a couple of times I wasn't ready to say anything again yet, because I'd still be reading things.
But I just kept talking to it. It would say, like, "What's your next correction?" And I would say, "I just need a minute. Like, I'm still reading." And then it would say, like, "That's fine, I'm here whenever you need me." You know? And then, once I was ready, I'd say something else. And then I got through the whole website. And at the end, I was like, "Can you take this and turn it into a bullet-point list of page-by-page corrections?" And it was like, "Absolutely. Here's your page. Would you like it converted to a Google Doc?"
It was unbelievable. It's not that mind-blowing in terms of what it did. It's just—I would have spent so much time doing that. And then I would have had to type out the bullet points. It was just so cool.
Bridget McCormack: I mean, you're right. It is the kind of thing that we know it does well, but that's the kind of—like I'm still having these breakthroughs, right? In my regular workflows where it occurs to me, like, "Wait a minute, why am I doing this alone? I have my helper over here." But that one you just described is amazing because it can keep track of it. It's like actually working with a coworker, right, who's kind of listening to you and keeping track of all of it and then producing a perfect final product. That's amazing. That's really smart. I like that.
Jen Leonard: It's always willing to go the extra step for you. So it's like, "Oh, I don't really like the way this is laid out, but I'm not sure exactly why." And then it would be like, "Would you like me to propose some different ways that we could lay this out?" Like, sure. Because I have no idea what I'm doing.
You and I always talk about it in presentations. It is the smartest, most eager intern you've ever had that is just looking for your direction.
Bridget McCormack: Totally.
Jen Leonard: That was my one. My other one was not gen AI, I don't think, but I used ElevenLabs, which I'd heard about from some friends—it's just an app that you can use to convert text to audio. So, getting ready for our conversation today, I dropped the URL for the AI 2027 Report, which we'll talk about in a minute, into ElevenLabs and listened to it last night while I was lying in bed. And that was so easy and delightful, so I'm definitely going to do that more. And I think it's free—I didn't sign up for anything, but it was very fast. It's sort of like NotebookLM, but not as fancy. I could load things in and go for a walk and listen to them.
Those were some really cool AI Aha!’s this week, even if they weren't, like, mind-bending technology. Like you said, just finding new and interesting ways to use the tools that we have.
Segment 2: What Just Happened: AI 2027 Report Predicts Superintelligence—and Legal Upheaval
Jen Leonard: So, today in our What Just Happened segment—which is where we talk about things from the broader macro tech development world and share them with our legal audience—we're going to dive into what is called the AI 2027 Report. And I'm going to turn it over to you, Bridget, to sort of summarize what this report is. It's basically a futuristic prediction of the next few years of AI development, but I'll let you take it from there.
Bridget McCormack: My attention was drawn to this report because the hosts of The Artificial Intelligence Show, Paul Roetzer and Mike Kaput, were talking about it on Tuesday. And then on Friday, the Hard Fork podcast hosts were talking about it as well—in fact, they interviewed one of the authors of the report. And I can see that he's also been interviewed on basically every kind of tech podcast in the last week.
So it’s just kind of making the rounds in the AI community. And therefore, I just got interested enough to try and pull it up and take a look. And basically, it’s this very detailed scenario forecasting document that predicts the ways in which AI might evolve over the next few years.
And they predict this very dramatic takeoff around 2027—which, I understand from one of the interviews (and I can't remember which one), was actually pushed back a year from their original thinking. Like, originally I think they thought everything they predict happening in 2027 was going to happen a year earlier, but after the work they did, they pushed that back a year.
The lead researcher—or one of them—was an OpenAI researcher. I didn't know this, but learned, again from one of the interviews, that he, Daniel Kokotajlo, left OpenAI when he became concerned about safety issues. So I don't know if that should influence how we think about his predictions or not, but it was interesting to know.
Now he works at this nonprofit located in Berkeley, California, called the AI Futures Project. And he and his co-authors believe that AI could surpass human intelligence by 2027, and that the transformative changes that will result are ones nobody is planning for. Basically—I'm summarizing here—they're tumultuous and really impactful, and we aren't thinking very hard about what that's going to mean.
I think the way he puts it is, within this decade, there will be enormous changes that exceed those of the Industrial Revolution. And I think we’ve sort of heard this in other places before. I mean, it sort of reminds me of Dario Amodei’s “Machines of Loving Grace” from a few months back. And we’ve had a few essays from other leaders of some of the leading labs that say similar things about the trajectory.
So the trajectory, in a way, isn't surprising. I think one reason people are taking note is that Daniel and some of the other people on his team have a very good history of predicting AI outcomes, and they have for many years now. Their future forecasting has been right historically. And because they're not associated with a lab, you can't really discount it as them trying to hype their own technology, right? They're really just trying to get it right.
In his interview on the Hard Fork podcast, he said he's really eager for people to tell him why he's wrong. There's like a portal where you can send ideas about where you think they missed a step in their very robust forecasting methodology, and they'll think about them and respond to them. They wanted this to be the start of a conversation—one where they hope they're wrong about the bad outcomes they've mapped out. I think that's a pretty decent explanation of what it's about. Does that seem right to you?
Jen Leonard: Yeah. That was excellent. And like you said, I think the lead author had written a piece before ChatGPT emerged, basically predicting the entire ChatGPT moment. And I think I heard on one of the podcasts that the only thing that was flawed in his original projections was that he anticipated it would take longer than it did in real life.
So he was actually more conservative than real life. But most of what he predicted turned out to be true—which is why people are paying attention to it.
And the other pieces that struck me: one was this idea of an escalating war for AI supremacy between the U.S. and China. And this paper came out before the latest round of tariffs, which is interesting, because a few weeks before this paper came out, I listened to another podcast with a different expert in geopolitics and AI. And their theory was that the major powers in the world would ultimately be incentivized to collaborate to shape the future responsibly as AI becomes more and more powerful. And obviously, it seems to me that that's highly unlikely—more unlikely now than it was even when this paper came out.
So that was one piece—the political ramifications of this accelerating arms race between the U.S. and China.
The other thing that struck me is how the report highlights the growing public backlash as AI causes more job disruption. If you look at the dashboard on their site—which is super visually compelling, by the way—they show a data point where currently around 1% of the public sees AI as a major problem. But in two years, when they predict superintelligence will emerge, that number jumps to 20%.
We’re already seeing instability in our systems. So that kind of projection seemed important to me. And the point of the report wasn’t to be perfectly right about everything, but to raise awareness and start conversations.
Bridget McCormack: Yeah, I think that’s right. The thing that really made sense to me—even though I’m not a technologist—was the idea that once AI is better than humans at coding, it can supervise other AI coders. And at that point, progress accelerates rapidly. That sort of made intuitive sense to me.
Jen Leonard: Agreed. And for our legal audience, I thought it was interesting that they view research agents as one of the key steps toward superintelligence. That feels squarely in the legal wheelhouse. Legal work is so research-driven.
I was in San Francisco recently, and it’s funny—they say you can tell what a city is focused on by looking at its billboards. In Philly, it’s personal injury attorneys and jewelry stores. But in San Francisco? Every billboard is about AI. It feels like the future.
So my takeaway is that this stuff is moving fast. And the people closest to it are starting to feel the urgency. But the rest of the world may not feel it for a little while.
Bridget McCormack: Yeah, I agree. Even if the forecasting is right, there are still lots of bottlenecks—legal, societal, human—that could slow full adoption. But it’s important that people know this report is out there. And that it didn’t come from a lab trying to hype something. That’s part of what gives it more credibility.
Jen Leonard: And I think for our audience—many of whom haven’t even heard of this report—it’s useful just to know these conversations are happening. Like, there are people who believe that in two to three years, this technology will be superhuman. It’s wild to think about that when so many lawyers still aren’t talking about AI at all.
Segment 3: The Ethics Evolution: Why Lawyers Must Embrace AI
Jen Leonard: That brings us to our main topic, which is what is happening in the land of the lawyers. And we wanted to talk today about the two sides of the ethics coin. Ever since ChatGPT emerged on the scene and poor Steven Schwartz cited those hallucinated cases in the brief he filed in federal court, almost all of the conversation around ethics has focused on the ethical risks involved in using AI—hallucinations and biases.
But we're starting to see data—we talked about it on a recent podcast with the VALS benchmarking study—that AI is increasingly as accurate as we are at producing new content. So today we're going to talk about whether we'll start to see a shift in the ethics conversation, toward a world where it will soon become unethical not to understand how to integrate these tools into your day-to-day practice.
And we have two examples—two case studies—where the headlines might lead you to believe that AI was really the transformative piece of the litigation involved in both of these cases. On a closer read, it's probably a little bit more mundane than that. But I think it's still important for people to be paying attention to.
So maybe I can turn to you, Bridget, and ask you to guide us through the two different cases that have come out with some AI influence in their lawyering.
Bridget McCormack: Yeah. What's interesting is that the AI had a place in the headline about each of these cases. Both were med mal cases, and both were in Georgia—personal injury practices in Georgia. One ended in a $25 million verdict, and one in a $7.2 million verdict. And I don't really think the particular underlying med mal issue is what was most important about them.
But the headline in each case was like, "AI wins a $25 million verdict," which isn’t exactly what happened. And maybe I’m exaggerating the headline. But I think it overstated the role of AI in each of them. On the other hand, in each case, the use cases—the way the lawyers used AI—were the ways I think a lot of people are using it in regular, everyday life.
They weren’t using it for researching or presenting AI-written documents to a court, which seems to be where everybody has the most concern, but rather to help them present their case. So in the $25 million case, the lawyer used ChatGPT to help her think about how to talk about the particular injury to a jury. And like many med mal cases, there's a sad story, and how you talk about that particular sad story to a group of twelve people who are not lawyers—but are having to make some legal judgments around facts—is the kind of thing that, frankly, the technology would be great at.
I'm not surprised at all that she was able to get good help from the technology to think about how to talk about the underlying injury in that case.
And in the other case, the lawyer used ChatGPT again to help prepare for the expert testimony on the other side. I believe he said they were even able to find a paper where one of the experts on the other side had said not-X when he was testifying to X. They literally found a smoking gun statement by an expert that they would never have found on their own.
And you and I know from the work that deep research does, no matter how good we are at legal research or Google research or any other kind of research, we're no match for what the capabilities are that this technology has now. So that too feels like a very “Yes, for sure.” Like, why would you not try that? You’d do all the things you normally do, I think. But then why wouldn’t you get some help thinking through the expert testimony you’re going to be facing—especially since you have their reports, right? And you can go figure out in advance what else is out there that you might want to ask about in a really targeted way.
So in neither case was there anything about the use of the technology that I think would cause anyone to say, “Oh no, there’s an ethical concern in using it.” And I think, to your point, it’s probably the opposite. Once you know that it can help you present a better closing argument—especially around a tricky or sticky set of facts to talk about to people—or once you know that you can get help figuring out how to counter an expert, which at the end of the day is what the whole case is about in a med mal case—then isn’t it unethical not to use the technology that you know is available to you?
It is so interesting. I mean, I was doing a prep session earlier today for a presentation I’m doing in June to a group of lawyers in a practice area where they are really late to the party. But they’re trying to figure this out: “What does it mean for our practice area?” And the folks organizing the conference kept focusing on, “We need to know all the ethical problems and all the ethical risks.”
And I was thinking, I mean, I’m going to probably sell you on both sides of that. Yes, you want to be really careful about what you do with any confidential information. I think we all know that lesson now. And you want to check the technology’s work, because it will give you an answer even when it’s not sure. But you might have to use it to provide competent representation once the technology can do things that you simply can’t do as well.
Jen Leonard: I think our shared worldview on the legal profession writ large is that we’re not client-centered enough, not public-centered enough, and we’re very lawyer and judge-centric. And I think if you were to ask the average person paying her lawyer to represent her in a case like this—powerful technology exists, we could use it or not—would you like us to use it with safety guardrails in place? I don’t know many people who would say no.
I think it's good that the ABA guidance came out and that there's discussion about having informed consent with your clients. But to me, that's the whole ballgame—that's why you're there. And the $7.2 million verdict case stood out to me as just: you're representing people, and you're seeing this tiny little piece of the ocean through discovery and the research you're able to do, but there are these depths of data and information that you haven't had access to—or new ways of thinking about things or sharing things, like the $25 million verdict.
And I don’t know why you wouldn’t want to plumb the depths of the ocean and find all of these different things that are like unlocks for lawyers.
Bridget McCormack: I feel like we had this one sort of counterargument to all of the ethical risks, which was fees—something even the ABA formal opinion flagged for lawyers. And we've heard from in-house counsel who say, "If you can do the work with the help of this technology at a fraction of the cost, there's an ethical rule that says you have to do that." I feel like lawyers have had their heads around that for some time, and it's been the primary reason lawyers are like, "I guess I'm going to have to figure it out."
Like, “I’m going to have a duty to—for cost reasons.” But also just to be competitive in my market. If my neighbor’s figuring it out and can offer the same services significantly cheaper, if I want to continue to compete, I’m going to have to use it. But this is just an extra layer to that. It’s not only that you can do it and save money—but that’s true here as well—it’s that you can do it better. That’s the difference, right?
When I used to do jury trials, either before I was on the faculty or when I was working with clinic students, I hardly ever had clients who had significant resources for litigation. Sometimes in our post-conviction Innocence Clinic litigation, we were able to recruit volunteer experts—but it was always volunteer experts. There’s a tremendous upside for lawyers who are working on really important cases but don’t have big budgets—whether they’re at small firms, in solo practice, or in post-conviction Innocence Clinic practices—where they really don’t have the budgets for experts or to counter experts that traditionally you needed. I’m not sure you do anymore.
Jen Leonard: Same. And we've been saying this from the beginning—I thought it was fun that these were two plaintiffs' lawyers who benefited from this. Not that there were enormous firms on the other side in these two specific cases. But you can see how, going up against a behemoth firm, being able to source some of that research or shape your closing argument in a way that you wouldn't have otherwise just gives you such a leg up.
And so I’m constantly on the hunt now for stories like this to share with lawyers. I think it really makes concrete what we’ve been trying to say in a theoretical way. And I hope more lawyers start to think about these tools as augmenters of their abilities.
Bridget McCormack: Yeah, you just gave me an idea. These two specific examples are so great because they are concrete. And lawyers will be able to relate to them right away. Right? We always hear when we present to lawyers, like, “Okay, I get it. It’s definitely a disruptive technology, but what do I do with it? What do I really do with what I heard today?” And I like having these two examples.
Jen Leonard: Well, we will wait and see how the ethical landscape changes. But for those lawyers out there, we hope this gives you inspiration for the many ways you can incorporate AI—in ways big and small—into your practice. And we’ll look forward to seeing all of you on the next episode of 2030 Vision: AI and the Future of Law.