Summary
In this episode of 2030 Vision, Jen Leonard and Bridget McCormack explore the evolving role of artificial intelligence in the legal profession. The conversation dives into the American Bar Association's (ABA) formal opinion on generative AI, addressing key ethical concerns and the varying approaches taken by state bar associations. They discuss the importance of understanding AI's capabilities, the necessity of prompt engineering, and how generative AI can enhance access to justice.
Bridget & Jen focus on the ethical implications for lawyers, highlighting the guidance emerging from state opinions in Pennsylvania and Minnesota. Discover how AI is transforming legal practices, the potential for regulatory sandboxes, and what this all means for the future of legal technology.
Key Takeaways
- AI is revolutionizing the legal landscape and practice.
- Lawyers must understand AI’s capabilities and limitations.
- Prompt engineering is crucial for effective AI use.
- The ABA’s opinion on generative AI stresses existing ethical obligations.
- State bar associations are issuing guidance on AI integration.
- Pennsylvania raises concerns about the unauthorized practice of law.
- Minnesota is considering a regulatory sandbox for generative AI.
- Generative AI can improve access to justice and reshape fee structures.
- Legal professionals must embrace AI while maintaining ethical standards.
Transcript
Jen Leonard: Hello everybody, and welcome to the next episode of 2030 Vision: AI and the Future of Law. I'm your cohost, Jen Leonard, founder of Creative Lawyers, and I am joined by my wonderful colleague and cohost, Bridget McCormack, president and CEO of the American Arbitration Association. We are here, as we are every episode, to talk all about the changing landscape of artificial intelligence and its impact on the future of law. So welcome, everybody, to our discussion today, which will be about some of the frameworks happening within the profession, particularly at the state bar level and out of the American Bar Association, to start to put some contours around how lawyers should be thinking about integrating the use of AI in their practices.
We'll talk a little bit about some early distinctions across states in how they are thinking about lawyers’ use of artificial intelligence, and share our thoughts about how states might want to consider the use of AI in the future. Hi, Bridget. It's wonderful to see you.
Bridget McCormack: Good morning, Jen. It's great to see you too. I'm excited for today's conversation and also for so many other things we're conspiring about in this area in the coming months. So it's great to see you.
Jen Leonard: Absolutely. There's so much good fun to be had right now finding better ways to do things. As we do every episode, we'll start with our Gen AI moments since our last recording—the moment in which we each used generative AI to do something that felt magical to us as we're learning to use these tools in our day-to-day lives. Then we'll share two definitions that might be new to listeners (ones we're learning about as well) so that when you hear them in the ether as the conversation unfolds, they'll make more sense to you. And then we'll dive into the discussion.
Gen AI Moments
Jen Leonard: So, Bridget, can you kick us off with your Gen AI moment for the week?
Bridget McCormack: Yeah, absolutely. As you know, Jen, my husband and I bought this very small little cherry orchard in northern Michigan that is currently being farmed by the farmer we purchased it from. But eventually we want to figure out how to farm it ourselves, and it seemed to me that I'd better start figuring out what that looks like. Obviously, there are the technical aspects of it—what does it mean to farm cherries?—but then there's a whole bunch of equally important business questions, tax questions, and regulatory questions. So I decided I had to start kind of putting together some planning for my learning.
And so I have worked with both Claude and GPT-4 to help me figure out a plan for my own education around all of these topics. And it's been kind of fun. That use case probably isn't that surprising, and my guess is lots of people are using the technology that way now. But the thing I wanted to focus on today was that I wanted to ask the models how they thought they could be most helpful to me in this process. So I said to GPT—it's so weird that I talk about GPT like it's my cousin or something—I said, "How do you think I can best use you as I go through this education process, especially since you don't always remember what we talked about the last time?" I wanted to figure out what were the ways that I didn't fully understand how to use it.
And surprisingly, GPT... I expected it to point me toward a Microsoft product for keeping track of my learning, just because, you know, OpenAI has this relationship with Microsoft. But it actually gave me a choice. It told me Google Docs would work just fine, or any other note-taking app, or the Microsoft suite. And I thought that was pretty interesting. But the experience of asking the models about the ways that I could use them to help me throughout the process was kind of new to me, even though I think I've read others recommend doing that. I don't think I had ever actually done it before. And I now want to say that I highly recommend—if you're frustrated with the output—tell the model, "I was kind of hoping for [whatever it was you were hoping for]. How can I get you to respond in that way?" It's really interesting how helpful they want to be.
Jen Leonard: I think that's such a good tip. I've been running some workshops with lawyers to get them thinking past, you know, how do I use this in my practice tasks? (like, how do I use it to draft a lease? how do I use it to draft a brief?) and thinking more creatively in open-ended ways about the business—about business development and marketing and different, more creative applications.
And so a lot of those workshops involve lawyers sitting down in a room together with their laptops, using a public frontier model and an anonymized hypothetical situation. And I've noticed that—like you would expect, like most of us do—the initial reaction is sort of sitting and wondering what to do with it, because that's how we engage with software and technology traditionally. And then they'll talk to each other and say, "What should I ask it? What can it do?" And I'll say to them, "Ask the LLM what you should ask it. Ask the LLM what its capabilities are." But it's a weird switch, I think, because it's so different from how we have interacted with technology so far. So I think it's a really good tip for people to be aware of. I do it now because now I'm much more aware of it. And we'll talk about this in a future episode, but there's a new model out from OpenAI.
And when I found out it came out, I asked it, "What do you do differently that the last model didn't do?" And it had a whole nice list of things. It's such a jumpstart. So I think it's a really good tip.
Bridget McCormack: Yeah, it is a big transition though, in terms of at least most people's history of working with technology. What about you? Do you have a fun generative AI moment for us this week?
Jen Leonard: I feel like mine is not as insightful as yours, but it's further evidence of how much more efficient we can be by using these models in things that we're expert in or things that we do professionally. I've been using it a lot lately in developing facilitator’s guides for these workshops—so thinking through, you know, what are some open-ended problems that lawyers might face? It will generate, "Here are ten different ways that they could use this." And I'll think, okay, you know, four of those are really good ideas; the other six are gonna be eye rolls from law firm partners. But that's where it works really well right now—when you can apply your expertise. I've also been using it to draft instructions for worksheets that guide those exercises, because one thing I find challenging when designing an exercise for the first time is empathizing with the person reading the instructions on a worksheet. You know what you want them to do and get out of it, but you don't know if that will translate. So I will work with it to draft the instructions, and then I'll say something like, "Imagine you're completely new to this task. What are some things in these instructions that could be confusing or not clear enough?" And it'll surface a few, and then it will redraft the instructions. So you end up spending a lot less time in the workshop pausing to answer questions where the instructions aren't clear. And so I love that use case. It's great for professors, I think, when they're designing activities with students.
Bridget McCormack: That's really interesting. Are you collecting feedback from the people you're working with about it? And are you continuing to iterate on the models you're using—the worksheets you're using, I guess?
Jen Leonard: Always, and I'm always getting feedback. What I've heard so far from a few of these workshops is exactly what we were hoping to hear, which is: "I hadn't thought about this, because I'm thinking about this in the confines of, like, 'How do I use this in Westlaw? How do I use this in Lexis? How is this going to impact the billable hour in my particular practice area?'"
But now they're thinking more broadly, like, "How do I use this to generate more business? How do I use this...?" I had a lawyer recently participate in one of these workshops, and he was really excited because we were working on an RFP activity—how would you use this to reimagine RFPs to potential clients? And he said, "You know, I hadn't thought about this. I hadn't tried this out, but imagine what our lives would be like as lawyers if on Saturdays and Sundays, instead of having to sit with a blank piece of paper to do something that leads to business (but the exercise itself can be laborious and take up your whole day), we could do this fairly quickly, think more creatively, and then go do things with our lives on Saturday afternoons and Sunday afternoons." Like, that's the goal: to get people to move away from (A) narrow thinking and (B) fear, to broader, more creative thinking and excitement about what this could do to improve our lives.
Bridget McCormack: Yeah, that's super interesting and sort of a metaphor for a lot of learning around this technology. I mean, our friends at the AI Institute talk about the use-case method and the—I think they call it the problem-focused method, you know. And the use-case method is like, what can you use the technology for right now to just input into your normal workflows to make those more efficient. But I think the problem-solving method is where the real impact is, which sounds like you're already doing it with the people you're working with—which is like: how do we just reimagine this entire workstream or this entire problem that we're trying to solve now that we have these new tools? Like, let's not just pave the cow paths, right?
Jen Leonard: Right. And it's interesting, because I do want to get better at framing the directions, which I think is part of the approach, but the major approach is like a mindset shift so that lawyers really do feel encouraged to—I heard somebody say this recently—like, you're in a workshop, you're not going to break anything. Have some fun. Ask some things that are not "the correct thing" to do with it, and see what happens.
And I've seen that when they do that and they get into it, I've seen lawyers identify—like, they'll ask, "Okay, here's what the hypothetical RFP says, for example, and here are the practice group questions that they've asked us to think about." But then I've had people say, "Then I asked it back, 'What are some things that this GC might not be thinking about with respect to this project? Or what are some things that might also be on their plate that we could incorporate into this?'" And it'll generate some ideas that they hadn't been thinking about before. And so while they're having the conversations about, you know, "This could impact our business model in negative ways and we should be afraid of that," they're also sort of like, "Well, it also surfaced this thing that we don't currently offer through this practice group. But if we had time, we might be able to build this out as a service that's really in need, but nobody has this yet." And I think that's really cool. I love those conversations.
Bridget McCormack: Yeah, I think that's the sweet spot, at least for lawyers right now—thinking about what those new things are that you couldn't do before that you might be able to do now.
Jen Leonard: Yeah, all we have to do is keep ourselves from filling in all the time savings with new tasks and new ways to make ourselves miserable. But we can still have hope. And another thing we like to do is help people understand some of the strange terms and concepts that come along with learning a totally new technology.
Definitions: Prompt Engineering & GPTs
Jen Leonard: In each episode, we each share a term and explain what it is. So I'm going to start with you, Bridget—you're going to talk about prompt engineering.
Bridget McCormack: Yeah. I feel like prompt engineering might be one term that most people understand at this point. It feels like one of the earliest new terms people learn about when they start playing with the models. But it's the practice of crafting and then refining the input prompts that you give the large language models to produce the output. In other words, it's the particular questions that you ask when you want the model to give you some output.
We've all learned about this along the way, and there are actually lots of guides on prompt engineering now—even some from the companies that build these models—which are useful if you're just starting to learn about it. It's a way to strategically think about the questions, instructions, and context that you give the model.
All of that is to help it produce the answers that you're looking for. The clearer you can be—trying to leave out ambiguity, giving it background information to frame what you're looking for, telling it to think through its steps and show its work—the bigger difference it can make in the answers it produces. Setting limits is helpful. Giving the model examples of the kind of thing you're looking for is helpful. And then the follow-up questions you ask are sometimes almost as important as the original ones. That whole practice is what we're referring to when we talk about prompt engineering.
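To make that concrete, here is a minimal sketch of the pattern Bridget describes (a role, context, constraints, and a step-by-step instruction). It assumes the OpenAI Python SDK with an API key in the environment; the model name and the prompt contents are illustrative, not anything specific from the episode.

```python
# A minimal prompt-engineering sketch. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# An underspecified prompt, included only for contrast.
vague_prompt = "Summarize this lease clause."

# The same request with a role, context, constraints, and an
# instruction to work step by step, per the techniques described above.
engineered_prompt = """You are a commercial real estate attorney.
Context: I represent a small-business tenant reviewing a lease renewal.
Task: Summarize the clause below in plain English for my client.
Constraints: Under 150 words; flag any term that shifts risk to the tenant.
Work through the clause step by step before writing the summary.

Clause: [paste clause here]
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": engineered_prompt}],
)
print(response.choices[0].message.content)
```

Running the vague prompt and the engineered one side by side is a quick way to see how much the added structure changes the output, and, as Bridget notes, the follow-up prompts you send after the first answer matter almost as much.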
Jen Leonard: Yeah, absolutely. And the point you made there about saying things that sound a little weird but improve the performance is, I think, something that's surprising to people. Like, I've seen Ethan Mollick post that if you tell the LLM to go back and read the question again—sort of like you would to an elementary school student—it gets better results. So some of the weird things you can say to these models to improve their performance are very odd.
Bridget McCormack: Yeah, I mean, I don't know if this is still true, but there was a period where Ethan and some other scholars—or maybe he was just reporting on others' research—showed that if you told the model it was May, it would do a more thorough job than if it thought it was December. Because I guess in December everybody kind of gives their work short shrift, since we're trying to get off to our holiday. So that was fascinating. I don't know if that's still true; I assume things like that get better and better as time goes on and it doesn't cut corners just because it might be December 21st, hopefully.
Jen Leonard: That's funny—I remember that.
Bridget McCormack: All right, so we all talk about "GPTs" all the time, and it's in the title of a lot of the models that we're using, but what does GPT mean?
Jen Leonard: Yeah. One area where these tech companies have done us no favors is the naming of all these things—terrible marketing. I'm convinced it's on purpose, because they have to know by now that they're not helping anybody figure this out. So "GPT," of course, is most familiar from "ChatGPT," and it stands for Generative Pre-trained Transformer, which is the foundational description of what this technology is.
Then there's another use of GPT in the plural sense—personalized GPTs—which are customized versions that you can create in ChatGPT tailored for specific use cases, using documents or personas that the GPT will adopt to help you answer particular questions or guide you on particular topics. So if you hear people talking about ChatGPT and then you hear people referring to GPTs (plural), usually if they're using the plural form, they're talking about these customized versions of the model. It's also confusing because the other tech companies are developing their own versions. Google now has what they call "Gems," named after Gemini, which is their version of ChatGPT; Gems are their take on these customized assistants. So you'll see these terms crop up.
These personalized GPTs are really interesting to use. If you have a specific task that you're doing over and over again, or a particular resource that you frequently consult that can be lengthy and hard to get through, you can create a personalized GPT. For example, I have a different GPT for each of my kids' school handbooks, because they're like 100 pages long each and I only ever need to know one small piece of them. I keep those GPTs up on my desktop in the morning as we're rushing to school.
It's sort of like I can ask, "Can you remind me where I'm allowed to pull the car up for drop-off at my son's school, or where we meet the teachers for back-to-school night at my daughter's school?" And it will just search those documents and generate a really quick response for me. I've been trying to use it in sort of low-stakes cases like that. Have you used any GPTs, Bridget?
Bridget McCormack: Yeah, I have. Well, first of all, we have a number of people building them across our internal data at the American Arbitration Association, which is quite useful. And it's sort of a step along the way to building some chatbots, which will be useful for everybody for things like HR information and other internal policies and practices. Right now those feel to me like they reside in lots of different places on our website or in lots of different places on our SharePoint site, and I'm never confident I know which place to pull up. So building all of the information around one topic into a GPT has been one helpful way to interact with that information.
And I've built a couple just for my own use. For example, I have a "CEO coaching GPT" that I interact with when I want to bounce a question off something before I ask my team, in case it's an embarrassing question. I'd rather learn the answer online first.
Jen Leonard: I love that. I do the same thing too. I named it "Estée," after Estée Lauder, because I tried to think of a female entrepreneur. So I have a coach for female entrepreneurs that I ask a lot of questions to. And I gave it a lot of context around my confidence level as a new business owner—things that might be helpful for somebody who has less experience. So to your earlier point about prompt engineering, you can prompt-engineer within the GPT as you're creating it to customize the responses that it gives you.
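The persona-and-context pattern Jen describes can be approximated outside ChatGPT, too. Custom GPTs themselves are configured in ChatGPT's builder with written instructions and uploaded files rather than code, but here is a rough sketch of the same idea, again assuming the OpenAI Python SDK and using an invented coaching persona:

```python
# A rough approximation of a custom GPT: the persona and background
# context are set once in a system message, the way you would when
# configuring a GPT, instead of being retyped with every question.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()

coach_instructions = """You are a supportive business coach for new
entrepreneurs. Your client is a first-time business owner who is still
building confidence. Give concrete, encouraging, practical advice, and
ask one clarifying question when a request is ambiguous."""

def ask_coach(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": coach_instructions},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_coach("How should I price my first consulting engagement?"))
```

Here the system message plays the role of the instructions (and any uploaded context) that you would give a custom GPT when creating it.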
There are lots and lots of applications, I'm sure, for lawyers in the future as we really delve more deeply into GPT usage. So those are our Gen AI magical moments since the last episode, and our terms and definitions—prompt engineering and GPTs. Today, for the balance of our conversation, we're really going to talk about how the profession is starting to put shape around guidance for how lawyers should use generative AI.
Main Topic: The American Bar Association’s Stance on Generative AI
Jen Leonard: The American Bar Association issued its first formal opinion on generative AI, and we're also going to talk about some state bar associations' approaches to this. But maybe to level set, Bridget, I know this is a lawyerly audience and many people sort of understand all the different regulatory schemes, but could you talk a little bit about what these opinions really mean, how binding they are, and why we're focused on them in this episode?
Bridget McCormack: Yeah, I wanted to touch on this briefly before we dig into a couple of specific examples, because of course, regulation and sources of law come in many forms. Some states are looking at legislation around this technology—or at least California is. I guess I don't know for sure if any other states are as far along as California. California sent legislation to the governor’s office, I believe—I don't think he has signed it yet. And the White House, of course, issued some guidance last spring, I believe. Congress could also issue its own statute about how generative AI should be regulated. And then eventually, courts will issue substantive opinions when there are legal challenges around the use of technology.
In the meantime, bar associations—state bar associations and the ABA—have the opportunity to issue these opinions on the ethical use and potential risks associated with the technology. A number of states have done this, and they all seem to be converging on identifying the same kinds of areas where lawyers should be particularly attentive. Now, none of these ethics opinions are binding precedent in any way. So, if a lawyer doesn’t follow the advice of her state bar’s recommendation on how to, for example, approach billing with respect to this technology, that doesn’t automatically mean a lawsuit is going to be successful against that lawyer.
But these opinions are often relied on by courts and regulators as authoritative sources of the norms and best practices in the profession. And so they’re pretty useful for any lawyer who’s thinking about how to incorporate the technology into her practice—to at least be familiar with the state bar's statements on the use of the technology, and how it interacts with the state rules of professional conduct, as well as the ABA’s opinion, which it issued—if I remember right—earlier this year. That was ABA Formal Opinion 512 on the use of generative AI in legal practice.
Why don’t we start with 512, just because the ABA is a pretty authoritative voice across the practice and across the states. It makes sense to understand where they came down on the various issues that state bars are now also considering.
Bridget McCormack: What’s your sense of the ABA Opinion 512? What do you think it does well? Are there any questions it leaves open? What are your thoughts about it generally?
Jen Leonard: Yeah, so Formal Opinion 512 came out earlier this year. And it’s pretty comprehensive in its approach to thinking about generative AI with respect to the rules that govern lawyers—the norms, what we should be doing. It’s a 15-page opinion that covers everything from competence to confidentiality to fees. On competence, it underscores that lawyers should understand the capabilities and limitations of any GenAI tools they use.
I think notably, it says that lawyers do not need to become experts in the underlying technology, but they should generally be familiar with what the tools can and cannot do. That’s important. It also talks, of course, about confidentiality. I think that’s the biggest question that comes up—especially for law firms—around safeguarding client information. So, being sure you understand whether you’re using a public model, where you should be putting in no confidential or sensitive information at all. There are going to be some really interesting nuanced questions—ethical questions—in the years ahead about, for example, asking hypothetical questions in a public model for feedback that you then use in client-facing work. But for now, the guidance is: be really careful.
Jen Leonard: There’s also discussion of client communication. So, how do you talk with clients about your use of generative AI? How aware should they be? And in what circumstances should you disclose it? The opinion also emphasizes our duty of candor to the tribunal—clearly influenced by the infamous “ChatGPT lawyer” cases, where fake citations were submitted. It reminds lawyers not to uncritically rely on GenAI-generated content, especially in litigation, and to double-check everything for accuracy before filing anything or citing case law.
Jen Leonard: It reiterates that supervision is still critical. Lawyers remain responsible for the work product that comes out of generative AI. Then there’s some interesting discussion about fee arrangements. For example, what should you consider when billing clients in a world where something that used to take hours might now take a fraction of that time? Or in a world where investing in this tech is cost-prohibitive—what does it look like to pass those costs on?
Jen Leonard: I think overall the ABA did a nice job of reinforcing the core idea that the lawyer-client relationship hasn’t changed. This is just a new tool. And the ABA’s opinion gets out ahead of it to provide some guidance, while making clear that we’re not rewriting the ethical obligations overnight. Most of the commentary I’ve seen has focused on the client communication part—this idea that if you’re using GenAI in a way that’s central to your engagement, especially for strategy or substantive work, the client needs to know. That kind of transparency matters.
The billing and fees discussion also generated a lot of reaction—especially from small and mid-sized firms. Some of them are feeling a bit outmatched right now, since they don’t have the resources to invest in AI the way large global firms do. There’s also a real concern about the business model—if GenAI fundamentally changes how long things take, then the billable hour model might not make sense anymore, and that’s a big shift.
But overall, I thought the opinion struck a good balance between being practical and forward-looking, and it also left space for important future conversations. It didn’t try to over-prescribe anything.
Bridget McCormack: Yeah, I agree with that. I remember in the early days of this technology, we saw some very quick reactions from courts—especially ones that had experienced lawyers using GenAI irresponsibly, like citing fake case names. That one guy in New York really set us all back four years.
But I think this ABA opinion—and many of the state bar opinions we’ll talk about in a minute—are careful, thoughtful reminders that just because there’s a new way of doing your legal work, your ethical obligations haven’t changed. You might have to make sure you understand enough about the technology to be able to carry out those duties that you swore an oath to uphold—but the duties themselves don’t change.
It’s funny. Lawyers often have a quick, risk-averse reaction to new things. And a lot of technology still feels new to many lawyers. I used to give a lot of talks about judges’ use of social media, and I had so many judges tell me, “I’d never do it. I mean, what if something goes viral?” And I’d think, “Well, what are you doing that’s going to go viral?” Because the thing is, the ethical rules for judges are the same whether you’re in a room having lunch with four people or live-streaming a conversation online. So... how about just follow the rules no matter where you’re showing up?
I think it's similar with lawyers and technology. The rules of the game are not different. The ethical ways in which you interact with your clients, your adversaries, the tribunals you appear before, and the public are no different than they were before the technology. You just need to understand enough about the technology to make sure you can carry them out.
I think that’s how I would summarize the ABA opinion—it does a nice job of highlighting the places where lawyers might need to deepen their understanding of technology to meet their existing duties. And if you read through it carefully, I think you'll be in really good shape for thinking about how this looks for your firm.
And the state ethics opinions, in my view, are similarly comprehensive and thoughtful. I’ve been impressed with several of them. One thing I particularly appreciate—both in the ABA opinion and in some state opinions—is the acknowledgement that because the technology is changing so quickly, the opinions themselves may need to be updated regularly. That’s realistic. It reflects the fact that we’re still in the early days of understanding where all this is going.
Main Topic: State Bar Perspectives on AI: Pennsylvania and Minnesota
Bridget McCormack: I thought we could talk briefly about the Pennsylvania Bar Ethics Opinion, as well as Minnesota’s—both the opinion itself and the broader approach Minnesota is taking. Each of those states includes something particularly interesting that I haven’t seen elsewhere. Does that make sense?
Jen Leonard: Please do. And just to add, most states are starting to come out with these thoughts, frameworks, and opinions. We’ve chosen to focus on Pennsylvania and Minnesota not to suggest they’re the only ones doing good work, but because each has taken a slightly different tack that’s worth exploring. So this isn’t meant to be comprehensive—it’s more about highlighting some different examples of experimentation.
Do you want to start with the Pennsylvania one, Bridget?
Bridget McCormack: I think so. So Pennsylvania—I want to say that overall, I think this opinion, like the ABA opinion and like many other state bar opinions, does a nice job walking through the rules of professional conduct as implicated by the technology. So, competency, confidentiality, ethical communication, meritorious claims, candor to the tribunal, the duty to supervise—that's one that we definitely see over and over again—and how to think about fees. I think the opinion does a really nice job.
But it does one thing that I haven’t seen in any other state bar opinion—and maybe there's a listener who can tell me I’ve missed it (and I’d be interested in hearing if I have)—and that's its approach to the unauthorized practice of law. The Pennsylvania rule states that a lawyer shall not practice law in a jurisdiction in violation of the regulation of the legal profession in that jurisdiction—or assist another in doing so.
It then goes on to say—and I’m paraphrasing here—in AI’s development, even in machine learning where AI learns independently, humans initially program the technology, making AI essentially a creation of humans. To the extent that the AI programmer is not a lawyer, the programmer may violate Rule 5.5 regarding the unauthorized practice of law.
Then it says: to avoid UPL, lawyers must ensure that AI does not give legal advice or engage in tasks that require legal judgment or expertise without the involvement of a licensed attorney. There must always be a human element in the legal work product to ensure that lawyers are upholding their ethical obligations.
And I’m sorry to read so many words and sentences on a podcast, but I think this is a pretty important section of the Pennsylvania opinion, and one about which we’re all going to have a lot more conversations—especially as the technology improves. I know we're going to talk in the next episode about the latest model from OpenAI that reasons and shows its work while reasoning. And that, to me, seems to really create issues if, in fact—under one interpretation of this section of the Pennsylvania ethics opinion—that constitutes unauthorized practice of law.
I think this section leaves room for different interpretations. And having a human in the loop is always a good idea when you're practicing law. But what that really ends up looking like from task to task is where I think it's going to get interesting and a little complicated. And I don’t know if you’ve had time to think about what this might mean—or if you’ve had lawyers ask you for advice about it yet.
You live in Pennsylvania. What do we need to do if we're practicing in Pennsylvania and using this technology?
Jen Leonard: I have not had anybody ask me about this part of the opinion. You drew it to my attention early on, and I’ve had conversations with Pennsylvania lawyers—where we’re presenting on a panel to educate Pennsylvania lawyers about it. I am really curious about this provision, especially the idea that if an AI programmer is not a lawyer, then the programmer might be violating the unauthorized practice of law by providing anybody who’s not a lawyer with access to this technology.
Because, as I read that, the programmer would be the big tech company—like an OpenAI or an Anthropic. And I can almost guarantee, unless again a listener tells us otherwise, that a lawyer is not programming that technology. It’s very likely a technologist doing that. So my question is: if I’m a regular person in Pennsylvania, and I pick up ChatGPT and I ask it a question, and there’s no lawyer in the loop, and it gives me something that I rely upon as legal advice—then who is the person responsible for that?
Is that the tech company itself? And again, as we mentioned at the beginning, these are not binding rules, but they certainly inform the way that the state supreme courts will think about their rules. And so it’s an open question to me that I’ve not had anybody ask me about. I’ve been asking other people. And the response I’ve gotten is, “Surely that’s not the intention of the rule. That can’t be the outcome.” But that’s how I read it. I don’t know if you disagree.
Bridget McCormack: Yeah, no—I don’t disagree. I think that’s what the words on the page mean. It does feel hard for me to believe that’s what it’s going to mean in practice, because I am sure—in fact I’m already aware—of people in Pennsylvania who are not lawyers who have used the technology to get help with legal problems.
I mean, one of my kids was living in Pennsylvania last year, and a friend of his had some trouble with her leaky roof in her apartment. And she was able to get some help and information from ChatGPT and use it to negotiate with her landlord. Was Sam Altman or his engineering team guilty of a felony in that case? I find it very hard to believe.
I can’t imagine that even the part of the profession that is most worried about protecting consumers—which is an important role that bar associations play—would think it makes any sense at all to pursue criminal charges against the engineers at a frontier model company that built a product people are going to find their way to use—especially when they don’t have other options. Which was the case here. And I’m sure this is one of many, many, many examples of that. So I don’t know. It’ll be interesting to see how it plays out.
Jen Leonard: Yeah, I agree. And I think you’d have two outcomes if that were to be the codification of the rules that govern Pennsylvania lawyers. One is, I think the state Supreme Court would be engaged in a game of whack-a-mole the likes of which we’ve never seen, trying to identify everyone in the Commonwealth using these technologies.
And to your other point—I mean, we’ve seen the leaders of these tech companies thumb their noses at the European Union or entire nation-states. I think probably they’re not that concerned about our different states coming after them for unauthorized practice of law.
So it’ll be interesting to see how Pennsylvania evolves as it codifies some of the rules governing lawyers. And also interesting to see how different it is from the way that some other states are choosing to think about the applications for people across their states using these technologies.
And Minnesota is a great contrast. And I wonder, Bridget, if you’d be willing to walk us through what Minnesota is doing in response to generative AI.
Bridget McCormack: You know, I think Minnesota’s ethics opinion is—like many others—really thorough and thoughtful, and a great place to start for any Minnesota lawyer trying to get their arms around where they need to be extra careful in using the technology, given their ethical obligations to their clients and the public: competence, candor, confidentiality, fees.
There's one piece of the Minnesota ethics opinion on fees that I found really, really interesting. I think many lawyers are focused on the cost of using the technology and whether there is a way to pass that on when their clients are getting the benefit of it. That seems like a fair set of questions.
Minnesota reminds lawyers about the other side of that—that if lawyers could be using the technology to do certain tasks significantly more efficiently, they probably have an ethical obligation to do it. The quote comes from another ethics opinion, but it's one of my favorites: “Lawyers must remember that they may not charge clients for time necessitated by their own inexperience.”
So yes, learn what you need to learn to use the technology to help your clients, because otherwise, if you're charging them to do tasks manually that you could be doing far more efficiently, they may have a reason to complain about your service. That's kind of an interesting aspect of the fees discussion going forward for lawyers.
I do think, again, that makes it a little more complicated for small and medium firms that just have a harder time even finding the time to learn the technology and how it will benefit their clients. I don't know—do you have thoughts about that for them in particular?
Jen Leonard: I agree. I mean, I think in the long term generative AI is a game changer for small firms, solos, and plaintiff-side practices—outside of the contingency plaintiff-side practices. But in the short term, this is what I hear most frequently from midsize firms, small firms, and solo practices: first, they don't have the time, because at any level of practice in any of those models, you just don't have the time to build in the learning process.
But also, there's the investment you would need to make in having somebody come in and teach you how to use this, or in acquiring the tools themselves—I think they're really grappling with those costs in the short term. And I do get a little bit concerned that the big global firms have, you know, gobs of money and resources to throw at these things and figure it out more quickly. And I am worried that we're not being supportive enough of everybody else who serves a lot of different parts of the legal marketplace. And these opinions, I think, have generated the most discussion in those areas of the profession.
Bridget McCormack: Yeah. I mean, if anything, the opinions have been useful to get lawyers to focus on “What does this mean for all of us, right? Like, what does this mean for our firm? What does this mean for our practice? What does it mean for our clients?”
I think—even if the opinions are outdated six months from now because the technology has already moved so quickly that there are new considerations—they’ve been useful in starting these conversations. So I’m grateful to the bar associations who have taken the dive to get these done.
Minnesota has one other piece of their work, though, that’s worth mentioning—that you and I have talked about. And it may be because Damien Riehl, I think, chaired the committee. And Damien, of course, is at vLex and a longtime legal tech evangelist, I think, and a real innovator.
But the state bar greenlighted a new committee that's going to stand up a generative AI regulatory sandbox tasked with improving access to justice. In other words, the committee sees this technology as one that might provide some really novel, perhaps breakthrough, solutions for the access to justice crisis in Minnesota—the same crisis as in every other state in the United States. Most people can't afford legal help with their civil justice problems. And so they either just try to manage on their own, or they give up.
But the Minnesota Bar committee obviously recognized that there could be some issues with the regulatory backdrop—and UPL in particular. So the sandbox is a way to leverage the technology to help people, and give the technologists, lawyers, others who want to explore these options a safe place to do it, where the bar can also monitor the use of the technology in the sandbox to make sure that consumers are not harmed.
There are other states that have launched regulatory sandboxes more generally for tech products and other forms of business structures to address the needs of the public. But this is the first one I’m aware of that’s focusing its sandbox effort on generative AI. And I think it signals a real positivity from the Minnesota State Bar about what might come of it.
And that seems kind of exciting—that the bar itself views some real potential for improved service to the public it serves. I don’t know if you’ve heard anyone talk about that in your work, Jen, or have any thoughts about it yourself?
Jen Leonard: I have not been asked about the Minnesota approach specifically—probably because I’m talking mostly with law firm partners and the sandbox piece is really aimed at the access to justice crisis, as you said.
I’ve seen a lot of commentary on LinkedIn about it. And I got excited about it because this is exactly what you and I were hoping would be the future of the technology—trying to figure out, you know... I just think of our mutual friend and colleague Jim Sandman, who has always said, “We're fighting a five-alarm fire with a bucket of water and a ladder.” And this is the opportunity to really move beyond that.
So I was so heartened to see Minnesota getting ahead of it and starting to think early about it. I’m also noticing the reference to immigration here—even in the private sector context. When we're talking about these small firms and solo shops, I’ve heard from more general service (slightly larger) firms that they’re learning from their immigration partners. Because immigration is a service that lawyers provide that has been a flat-fee service for a very long time.
And so those lawyers are really well positioned to provide guidance to the rest of the profession, I think, as we think through the way that we charge for our services. But I was just really excited to see this. And it again seems a little bit more realistic in some ways than trying to fight what would be an avalanche of UPL complaints if we’re not thinking differently and more innovatively—both in how we realistically live alongside this technology, and also use it for good.
And also, you know, just the pace at which these opinions are coming out—to me, you sort of referenced this earlier in the conversation, but to put a really fine point on it, I mean, I think they’re not binding yet, there aren’t rules codified, there’s very little case law. But I think the signal that all of these opinions are sending is: this technology’s here, and we think that it will seriously change the way that lawyers practice.
And I think that that helps overcome a lot of the early skepticism among the entire legal profession—to have these bodies weighing in on it. I also note that I was doing a presentation and prepping for it, and looked up some past opinions from the ABA. And the ABA just issued in 2022 an opinion about reply-all usage, and one earlier this year about listserv usage—so technologies that have been with us for decades—and we’re still getting some early guidance on those.
And so the pace of these opinions, to me, signals that this is a very different type of technology, and the profession's taking it seriously, and I think that that's a good thing.
Bridget McCormack: But if the technology can allow us to never have to reply all or email again, I'd be really happy.
Jen Leonard: If generative AI could replace email in some way and somehow save us from like the back and forth about scheduling prep calls for panel presentations, it would be well worth all the upheaval that it’s created.
Bridget McCormack: I mean, amen. If generative AI can solve the back-and-forth email on practicing for panels, that’s the best use case.
Jen Leonard: Well, we're just about out of time, but I think this is a really, really interesting conversation for lawyers who haven't really had the chance to pay attention to some of these emerging opinions. And not binding, like we said, but really a thoughtful way to begin strategizing around your own practice's response—your own firm's response. So I'm interested to see where this heads in the days ahead of us. Any closing thoughts, Bridget?
Bridget McCormack: No, but I’ll keep my eye out—or we will keep our eyes out—for additional state bar offerings on the topic. And we'll, you know, revisit this anytime we see something interesting.
Jen Leonard: Well, thank you to everybody for spending some time with us today. This is a rapidly changing landscape, so we'll definitely keep you posted on any emerging topics. We'll be back next time to talk about some of the emerging capabilities of the models themselves. And in the meantime, we wish you all the best in your practices and in getting up to speed with the exciting possibilities that generative AI creates.