They also share practical applications of AI, including Claude-powered governance tracking and financial analysis, and reflect on key takeaways from Legalweek 2026. This year’s conference highlighted a shift from experimentation to ROI pressure, rapid enterprise adoption, and growing urgency around AI governance.
Key Takeaways
- AI Is Testing Legal Boundaries: The OpenAI lawsuit raises novel questions about unauthorized practice of law and whether existing statutes apply to AI systems.
- Governance Cannot Wait: Legal leaders increasingly agree that the risks of inaction outweigh the risks of imperfect governance frameworks.
- ROI Pressure Is Rising: Legalweek 2026 signals a shift from experimentation to demands for measurable value—though clear ROI remains elusive.
- Enterprise Adoption Is Accelerating: Corporate legal departments are rapidly adopting generative AI, though usage and impact vary widely.
- AI Is Becoming Operational Infrastructure: From governance trackers to financial analysis, tools like Claude are embedding directly into legal and business workflows.
Final Thoughts
AI is no longer a future issue for the legal profession—it is actively reshaping how legal work is performed, governed, and regulated. The decisions made now—by courts, companies, and lawyers—will define the boundaries of legal practice in the AI era.
Transcript
Jen Leonard: Welcome back, everyone, to AI and the Future of Law, the podcast where we talk all about artificial intelligence and its implications and applications in the legal field. I’m your co-host, Jen Leonard, founder of Creative Lawyers, here in person at Video City in Philadelphia with the wonderful Bridget McCormack, president and CEO of the American Arbitration Association. Hi, Bridget.
Bridget McCormack: Hi, Jen. It’s so good to be in person. It’s so much fun when we get to do this live.
Jen Leonard: I love it here, and it’s great to see you. I know you’re very, very busy, so I appreciate you.
Bridget McCormack: Everybody’s busy.
Jen Leonard: Some more than others. But we’ll move right into our episode for today. As with every episode, we’ll first share our AI Aha!s—what we’ve been doing with AI recently that we think is particularly interesting. In our What Just Happened segment, we’ll talk about the new lawsuit against OpenAI alleging unauthorized practice of law, among other claims, in a really interesting set of circumstances. And then, for our main topic, we’ll do some themes and takeaways from Legalweek 2026 and talk a little bit about the panel you moderated there.
So let’s dive in, Bridget. What are you using AI for these days, aside from everything?
AI Aha!: Building Trackers and Business Intelligence with Claude
Bridget McCormack: Aside from everything, I’ll tell you the one I’m kind of most excited about. We’ll talk later in the episode about AI governance, because that was the topic of my Legalweek panel. And as you know, the AAA is doing this large survey of 500 C-suite leaders about where they are on AI governance. So I’m tracking information about it for our internal purposes—how what we’re doing measures up to what others are doing—for the survey, because we’re going to produce some thought leadership around that, and honestly for you and me, just for our discussions.
Back in the way-back times of 2023—I think I was trying to use ChatGPT to tell me when there were updates on AI and ADR, AI regulation, and related legal developments so I could keep abreast of changes in the landscape. A lot has changed since then—both in what different regulatory actors are doing and in what the technology can do for tracking.
So after our last discussions a couple weeks ago, I switched to Claude Pro because I needed to get in on Co-work. It’s amazing. It’s so good.
I told it: I want to build a tracker to keep track of what we’re seeing in the AI governance space, as well as any new regulatory activity, and I want that translated into the different lanes where it might matter to me. And Claude built the most beautiful thing. It’s this gorgeous tracker where I can upload new materials I’m working on or producing, but it also updates on its own.
It does regular sweeps across a number of categories: court decisions, statutes and regulation—I’m just doing domestic for now, though it told me it could do international too, and I was like, enough already, Claude, slow your roll; my brain is tired—enterprise adoption, and agentic commerce, because that’s a topic I’m thinking about a lot and I’m interested in what governance looks like there.
It organizes all of that beautifully, and then it translates the updates into the specific things I told it I care about: the podcast, the AAA’s governance work, my governance survey, and anything specific to agentic commerce. So it’s pulling things directly about agentic commerce, but it’s also thinking about whether new regulation affects it. It’s amazing. I could imagine having a few research assistants working on that full time.
Jen Leonard: That’s incredible. I’m going to set that up too. I was literally just thinking about this yesterday. What does it look like? Do you have to log in and go find it?
Bridget McCormack: Right now, it mostly lives inside my Claude account in a couple of places. But I was just thinking on the train down that I want to have it send me an email—maybe Friday mornings or Monday mornings—with the updates and what I should look at when I tune in, because I don’t want to have to go in there and find it every time. I want it to tell me when it’s time to take a quick look. I’m sure it will do that. I just haven’t asked it yet.
Jen Leonard: That feels like the unlock. We work in Google Workspace, and now it has those daily summaries of everything. They’re not perfect, but I kind of like glancing at them—especially when it tells me there’s nothing urgent to worry about. I can definitely see this kind of reporting becoming part of the interface.
Bridget McCormack: I’m sure it will. How about you? What’s your latest AI Aha!?
Jen Leonard: I’ve also been using Co-work a lot, but for business analysis. I’ve been using it to do financial analysis and recommendations for our company, and it’s great because now we have three full years of information. Before, it was harder because it was just: we’re getting started, I don’t know, what should we do?
But I uploaded our books for the last two years plus the year-to-date information, and I asked it to give me a sense of strong-revenue periods versus weak-revenue periods. Then, because I’m not a financial analyst and I’m learning on the fly in this area, I asked: what else should I be asking you to do that I’m not even thinking of?
It generated this whole report—average monthly revenue, actual bottom-line profit compared to top-line revenue, a heat map across three years showing our average for each month.
Weirdly, it showed that February has consistently been our busiest month, which I would not have guessed. I think it’s because everybody is coming off vacation. So it’s a lot of new contracts and things like that. We already knew summer is always slow, and the heat map showed that too. It even started making recommendations, like: if summer associate programs are a big area for you, start marketing those earlier so you’re lining up work for the summer.
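The month-by-year roll-up Jen describes boils down to grouping revenue by month and year, then averaging each month across years to spot the busy periods. Here is a minimal pandas sketch of that idea, using made-up numbers and assumed column names ("date", "amount"), not the actual data from the episode:

```python
import pandas as pd

# Hypothetical invoice data standing in for the uploaded books.
invoices = pd.DataFrame({
    "date": pd.to_datetime([
        "2023-02-10", "2023-02-20", "2023-07-05",
        "2024-02-14", "2024-07-09", "2025-02-03",
    ]),
    "amount": [12000, 8000, 3000, 15000, 2500, 18000],
})

invoices["year"] = invoices["date"].dt.year
invoices["month"] = invoices["date"].dt.month

# Rows = month, columns = year; each cell is that month's total revenue.
# This table is what a heat map would visualize.
heat_map = invoices.pivot_table(
    index="month", columns="year", values="amount",
    aggfunc="sum", fill_value=0,
)

# Average revenue per month across all years, to find strong vs. weak periods.
monthly_avg = heat_map.mean(axis=1)
busiest_month = monthly_avg.idxmax()
print(busiest_month)  # with this toy data, month 2 (February) comes out on top
```

The same pivot works whether the underlying rows are invoices, bank transactions, or exported bookkeeping entries, which is presumably the kind of aggregation the AI tool performed behind the scenes.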
It’s similar to legal, and similar to the topic we’re going to talk about later, in the sense that I don’t otherwise have access to that kind of analysis. It would take money and time to find the right person to do that for a small business.
Bridget McCormack: Even if you outsourced it, it would still take time and money.
Jen Leonard: Exactly. So I’ve been using it a ton for that. And like you, I find Co-work incredible. I’ll set it to do five different things at once, go take the dog for a walk, and come back to something that’s 90 percent of the way there.
Bridget McCormack: It really is amazing. But I’m also finding that my brain can’t fully keep up with all of it. I can have more than one thing running, but then I still have my regular job, and I need more hours in the day—not just to manage the setups, because the setups themselves now matter, but also to absorb the actual substance. It doesn’t help much if it produces something really interesting and I never use it or even process it. So I need it to solve some time problems for my brain too.
Jen Leonard: Same. And I don’t know when we get to the point where I can say, this is what my Co-work does, and a year from now that’s still what it does. The setup itself takes time, and then there’s a new feature, or a new tool, or a new workflow, and suddenly you’re off in another direction.
What Just Happened: Nippon Life v. OpenAI
Jen Leonard: So, What Just Happened? Something really interesting just happened. I don’t know if you’re familiar with the case of Nippon Life Insurance Company of America v. OpenAI.
This is a really interesting case—possibly one of first impression—where OpenAI has been sued by a life insurance company, Nippon Life. And before I even get into the details, the company is seeking $300,000 in compensatory damages and $10 million in punitive damages.
Here’s what happened. There was a disability claimant, Graciela Dela Torre, who had settled her benefits dispute with Nippon Life in 2024, using a lawyer. The case was dismissed with prejudice, so it was done and could not be reopened.
After the settlement, she apparently became dissatisfied with her legal representation. She uploaded a large number of emails with her lawyer, along with the underlying documents, into ChatGPT. She asked ChatGPT whether her lawyer had been gaslighting her by pushing her toward settlement and whether she should have fought harder.
ChatGPT validated her suspicions. It said her lawyer had been rushing her, that he was not being fully transparent, questioned the attorney’s conduct, and encouraged her to fire him.
Then it helped her try to reopen the settled case on the ground that she had been misled by her lawyer and was entitled to further damages. That effort failed because the matter had already been dismissed with prejudice. But when that happened, the AI drafted an entirely new lawsuit against Nippon Life and then created 44 subsequent motions, subpoenas, and filings in that matter. Many of those filings, the court found, served no legitimate purpose in the proceedings.
Nippon says it spent about $300,000 defending against all of that, which is the compensatory-damages piece of the complaint.
One especially interesting detail is that somewhere in all of the documents ChatGPT produced, there was a hallucinated case cite—the phrase that strikes fear in the heart of lawyers. The AI had relied on that nonexistent authority, and Nippon’s lawyers discovered it.
So the three claims in the case are tortious interference with contract, abuse of process, and—most interesting for our purposes today—unauthorized practice of law under Illinois law. The theory is that ChatGPT was essentially providing personalized legal advice and drafting litigation documents without a license.
There have already been some interesting conversations about this. It may be the first major civil action alleging that an AI chatbot engaged in UPL. Stanford CodeX has argued that the better framing may be product liability—that OpenAI built a system in a way that allowed users to cross the threshold from information to advice. And New York legislators are also advancing a bill that would expressly make AI companies liable when their chatbots pose as licensed professionals.
Another interesting wrinkle is that, after all of this happened, OpenAI updated its terms of service in October 2025 to say users should not rely on ChatGPT for legal advice. But when others tested the platform after that update, it was still providing legal advice and drafting documents. So OpenAI seems to be trying to shift responsibility to the user through the disclaimer rather than actually preventing the conduct.
Bridget McCormack: By way of the disclaimer.
Jen Leonard: Exactly. And Nippon’s lawyers are now making the argument that the updated terms of service show OpenAI knew it was engaged in unauthorized practice of law.
Bridget McCormack: It definitely has evidentiary value in the lawsuit. It’s super interesting. A lot of people are obviously talking about it.
What I hadn’t quite connected until you were talking is the relationship between the proposed New York legislation and New York’s UPL statute. Every state has one, but they’re all different.
Some make it a misdemeanor, some a felony; it's criminalized in a lot of places. But it makes me wonder whether the New York legislation is itself a signal that the legislature doesn't think UPL statutes naturally extend to chatbots—if they felt they needed an explicit statute saying chatbots can't practice law.
Maybe it’s an acknowledgment that those statutes were written for people—because of course they were drafted long before anyone imagined a chatbot that could read a fact pattern and propose a legal course of action.
That may end up being the weakness in Nippon’s UPL claim. There’s no person actually practicing law here. So can you extend a UPL statute to a technology interface? That feels like a significant stretch to me.
That said, if they succeed, the advantage of the UPL theory for Nippon is that every state has some version of this kind of statute, and they’re also asking for declaratory and injunctive relief. The tort theories—and even a product-liability theory, which others say may have been a better fit—don’t usually get you there as easily. Tort law mostly compensates harm after the fact. UPL statutes are often specifically designed to support injunctive relief, and I assume declaratory relief as well.
So if you’re Nippon and you’re looking ahead and imagining a wave of claims generated by pro se litigants using chatbots, and you want to get in front of that, then injunctive and declaratory relief are a much more useful vehicle than just winning damages in one case. Otherwise you’re proving harm case by case.
Jen Leonard: That does make sense. I was curious why Nippon wouldn’t also plead a product-liability claim.
Bridget McCormack: I don’t know why they didn’t just throw it in too. You can absolutely plead more than one theory on the same facts. Maybe they thought tortious interference would be easier to prove. I’m not sure.
Jen Leonard: In a true-life AI confession, I learned a lot of this by using NotebookLM. I uploaded all the documents, created a podcast, and listened to it, which was really helpful. And the fake podcast host raised the question whether Nippon may be the canary in the coal mine for corporate counsel offices if this kind of thing becomes widespread over the next year or two.
The podcast made me wonder whether we’ll eventually need two different versions of UPL regulation—one aimed at the major tech labs and one aimed at human actors—because I’m not sure how we play whack-a-mole with millions of people using chatbots to file claims, unless I’m underthinking it.
Bridget McCormack: First of all, it’s extremely meta that we’re now talking about the podcast made by the documents. But tell me what the fake podcast hosts actually thought, because here’s what I’m confused about: if you’re an individual who can’t afford a lawyer, and you’re sitting in your Claude or ChatGPT account trying to understand your legal rights and responsibilities, and then you act on that, you’re not guilty of UPL.
Jen Leonard: Right, no.
Bridget McCormack: Okay, because if the Google podcast host thought that, then I’m going to have to report them for some kind of unlicensed podcasting.
Jen Leonard: I would love for that to be the outcome of this conversation. But I think their real point was that it’s fundamentally unfair for the tech companies to bury responsibility in the terms of service and push the burden onto individual users.
Bridget McCormack: So the disclaimer is doing the work of shifting responsibility to the human user.
Jen Leonard: Exactly. And their view was that maybe there should be a stronger regulatory framework around the tech companies themselves, both because they have the power and because there are only a handful of major model developers. The argument was: there should be more clarity around what these systems are allowed to do and not do. And they were making that argument in light of OpenAI updating its terms after the fact. If the companies are going to keep trying to skirt responsibility by burying limitations in terms nobody reads or understands, that doesn’t seem very good for the public.
Bridget McCormack: That makes sense. I’m not sure you can sensibly wall off only the frontier-model companies, though. As you saw at Legalweek, there are approximately four gazillion legal AI startups building on top of those frontier models. So why wouldn’t they also bear legal responsibility if they’re producing tools people rely on? Maybe the fake podcast hosts would say exactly that: if you’re shipping a product people are going to rely on, then you either stand behind it or you don’t.
Jen Leonard: One thing I also wondered about is the New York legislation. In some states, the legislature is more directly involved in regulating practice of law, but in a lot of states it’s really the state supreme court. So there’s this interesting branch collision too. To your point, if the legislature is advancing a statute like this, maybe it’s implicitly conceding something, and maybe not every supreme court is going to be thrilled that the legislature is stepping in to define the boundary.
Bridget McCormack: I do think we’re going to see interesting differences across states, because there are some where the courts take a much more muscular role in regulating the profession, and others where they’re perfectly happy to let the legislature define the limits of UPL because it’s sticky and difficult and fine, somebody else can do it.
But one of my biggest takeaways so far—and we don’t even know what’s going to happen in this case—is that this is a perfect example of why regulation by court doctrine is really not optimal. We only get court opinions when institutional actors with resources can afford to bring cases. So if that becomes your regulatory mechanism, you’re getting a skewed sample of disputes. There’s a reason another branch of government is elected by the public and is designed to actually deliberate over what regulation should look like.
Courts answer the question in front of them. And the question only gets in front of them if someone with enough resources brings it there. So regulation by court decision is a terrible way to govern something like this. I hope we can get our act together and figure out what future we actually want and how to build toward it.
And this is another exciting area for lawyers to get involved in. There’s a big, open lane here. Lawyers are needed to help figure out what the regulation should be.
Jen Leonard: That’s such a good point. And I’m still kind of stunned by how many lawyers I meet who aren’t educated about AI and aren’t using it regularly. To your point, that’s also an abdication of their ability to shape the future they want to see.
Legalweek 2026: Scale, ROI Pressure, and a More Modern Feel
Bridget McCormack: So let’s move to our main topic, which is Legalweek.
Legalweek 2026 was held March 9 through 12. You and I have now been the last three years, I think. It’s the biggest legal tech conference in the world. For a long time it was held at the Midtown Hilton, and it kind of outgrew that space, so this time it was at the Javits Center. Physically, I thought that was a huge improvement. I got lost at the Hilton constantly. There were half-floors and secret doors. I was never in the right place.
Jen Leonard: It was actually the first event I went to post-COVID, and I almost immediately had to go up to my room and lie down because it was just too much stimulation. And meanwhile people were pitching me e-discovery products left and right. I was like, I absolutely cannot buy your product.
I did have one sweet young sales rep on an elevator while I was trying to flee who said, “This is my first time out. Could I practice my pitch on you and get feedback?” And of course I said yes.
Bridget McCormack: But the exhibit floor this year was significantly better because there was just room for everyone. The ceilings were high. You weren’t constantly running into people. It wasn’t as hot or crowded. It felt modern.
Jen Leonard: One thing I found fascinating was Harvey’s presence. Those giant ads with chief innovation officers from major firms effectively endorsing Harvey felt like athlete endorsements. It was like legal-tech NIL. And regardless of how the competitive landscape shakes out, it underscored that there are new entrants here who think very differently from the incumbents and who are borrowing presentation tactics from other industries.
Bridget McCormack: That’s a really good point. It is a very different way of presenting to users.
Jen Leonard: And I think it’s effective, because they’re literally putting faces to the product. You’ve always had quotes and testimonials, but walking into the Javits Center and seeing chief innovation officers from major firms blown up on banners next to the brand felt new.
Bridget McCormack: Especially at that scale, and especially from firms that size.
Overall, I thought it was a really interesting conference. It’s always useful to get a temperature check across the industry, and there were definitely more vendors than ever. You could also really see growth among the larger players. Clio, for example, seemed to be everywhere, and Harvey’s presence was impossible to miss.
One of the takeaways that stuck with me came from Baker McKenzie’s Danielle Benecke, who described 2023 as the year of discovery, 2024 as experimentation, 2025 as the year we started to see real deployment in legal workflows, and 2026 as the year value is under the microscope. That feels directionally right to me, though I think some law firms were still in experimentation even into 2025. But the broader point was that this is the year organizations want to see a return on their investment in the technology.
I’m not even sure that’s the smartest frame. I don’t know that Danielle was endorsing it as much as reporting it. Nobody asks for the ROI of electricity. We don’t say: unless I can see the exact dollar figure at the bottom of my spreadsheet, I guess we’re going back to candles.
I think it’s a mistake to think about this too narrowly in immediate ROI terms. But maybe that’s where the market is. I was also struck by the statistic from Relativity showing generative AI adoption among corporate legal departments jumping from 44 percent to 87 percent in a single year. That surprised me. I’m not surprised that in-house teams are leading, because they have been for a while, but that number is enormous. I had to update a deck later that same day because of it.
Jen Leonard: I shared your reaction to Danielle’s sequencing, but when my partner Marielle came with me for the first time and we tried to recap the conference, what I kept coming back to was this: the theme was definitely ROI, but I didn’t leave with any clear answers to the ROI question.
There’s urgency in law firms right now—this sense of, okay, we’ve been doing this for 18 months, where is the transformation? But I didn’t come away with any clarity that any firm has a clean, compelling picture of ROI yet. And honestly, that makes sense to me. This is a massive change-management project across the whole relationship between firms and corporate legal departments.
Even getting people to use AI more consistently is still a challenge. So yes, firms want to talk about infrastructure and ROI because it makes them feel like they’re moving toward maturity. But I don’t think it’s really possible to demonstrate that cleanly yet.
And you’re probably not going to see revenue gains yet. A lot of firms are really hoping not to see revenue losses. Pricing conversations are definitely picking up from what I’m seeing. But it reminds me of the old conversations we had during the Great Recession around AFAs—who’s going to bear the risk of more litigation, less litigation, changing scopes, all of it. Which is hilarious, because if any profession should be able to hash out risk allocation, it should be lawyers. You bill in six-minute increments. You have the most detailed time records in the world. And yet people still seem unable or unwilling to do it, probably because the profession is so risk-averse.
I was also stunned by that corporate legal department adoption number, though I did not dig into the methodology. I’d want to know what size departments were surveyed and what they mean by “adoption.” Are we talking about actual usage? Or are we talking about licenses purchased and made available? Because we talk to a lot of corporate counsel, especially at smaller organizations, who have done very little for understandable reasons.
Bridget McCormack: Cost center versus revenue center matters there too.
Legalweek 2026: Bridget’s AI Governance Panel
Jen Leonard: The other big theme this year was the shift from chatbots to agentic systems, which is happening whether people fully realize it or not. Legalweek may amplify that because so many people there are specifically working on AI strategy, but it’s clearly where the conversation is going.
You moderated a really interesting panel on governance. I found it fascinating and kind of mind-bending. Maybe you could talk a little bit about the panel, who the panelists were, and what you covered.
Bridget McCormack: I was really pleased with the panel. I had Anna R. Gressel from Freshfields, who’s a partner and global co-head of AI and spends a lot of her time advising companies and boards on AI governance. She’s incredibly smart and thoughtful.
I also had Galia Amram, who is associate general counsel at OpenAI and focuses on safety in her role. I thought it was really valuable to have someone from a frontier-model company there, because so many organizations are trying to govern the things they’re building on top of those systems. It matters how the underlying model companies think about governance, and frankly how good their governance actually is, because people are counting on it.
And then I had Henry Hagen from Moderna, who you introduced me to, and he was fantastic. Moderna has been early to enterprise adoption of OpenAI’s products across the business, but they started with the legal team. They explicitly told legal: you’re going to go all in first. In retrospect, that strikes me as a very smart move, because otherwise legal can end up playing the slow-everyone-down role. Sometimes that’s necessary—that’s what lawyers are there for—but if you know a major technological shift is coming, you want the legal team to understand the trap doors and the safe lanes first. If legal figures that out early, it allows the rest of the business to move faster.
So it was a great group, and it made for a really interesting conversation because they each occupy different roles in the legal system and see governance from different angles.
One thing I didn’t fully think about at the outset was that all three of them are operating directly on frontier models. I didn’t have anyone there representing a layer like Harvey or Legora. OpenAI is OpenAI, of course. Moderna is building with OpenAI’s tools. Freshfields is working directly on frontier-model infrastructure as well. That was interesting, because a lot of legal buyers may actually prefer to outsource part of their governance burden to a company like Harvey or Legora. If you buy through that layer, you may feel like someone else is doing at least part of the governance work for you.
So the conversation ended up being a mix of the practical—how do you get from zero to one?—and the strategic—what changes in an agentic future, and how do you make governance something actionable rather than just a document that sounds good?
For me, the single biggest takeaway was how clearly everyone on the panel said this: yes, governance is hard; yes, it’s tricky; yes, it’s imperfect everywhere. But the bigger risk is doing nothing. There was real clarity from all of them that, when you weigh the risks of moving against the risks of waiting, it’s not even close.
Jen Leonard: That’s really interesting. And beyond learning, adjusting, and refining your governance structure over time, just the act of paying close attention to the systems you’re adopting feels so institutionally valuable. Even if the framework is imperfect, and even if the systems are moving faster than governance can keep up, you’re still building institutional memory. You’re learning what happened when you tried to govern in one way versus another. You’re building the muscle.
The organizations that say it’s too hard and decide to wait are missing that entire learning loop.
Bridget McCormack: Exactly. One of the themes on the panel was that waiting to be a fast follower isn’t really an option here. You can’t just sit it out. You have to get in the game.
Jen Leonard: It was a super interesting panel. I left feeling very informed and, in a good way, still uncertain—much like the panelists themselves—about what a truly strong governance regime ultimately looks like.
Our team is also really excited because we’re working with your team and Practising Law Institute to interview corporate counsel and create a series of educational offerings to help legal leaders actually get their arms around this.
Bridget McCormack: I’m very excited about that project because I don’t think there’s anything else out there quite like it, and people are hungry for it. We get a lot of calls from lawyers who basically want us to act as informal advisors while they’re trying to build governance frameworks. So I think the series is going to be really valuable.
Jen Leonard: This is something we’ve been telling law firms for a while now: this is a real source of revenue and service. There’s so much demand for this kind of guidance. There are all these open lanes, like you said, and firms could be extremely helpful to their clients here.
Bridget McCormack: Whenever you’re crossing a brand-new ocean, you’d rather not do it alone. It helps to have other people in the boat. Lawyers have a lot of oars to pick up right now.
Jen Leonard: Definitely. Well, that was a really great conversation and recap of Legalweek, Bridget. I was so glad to see you there, and as always, you put together an amazing discussion.
Bridget McCormack: I just facilitated it. I got lucky with good panelists.
Jen Leonard: Well, I think that’s a wrap on our Legalweek recap. This was a great episode: ChatGPT and UPL, and Legalweek trends—especially around AI governance. Thank you so much for walking us through it.
Bridget McCormack: Yeah, it was fun.
Jen Leonard: And thanks to everybody out there for joining us for this episode of AI and the Future of Law. We look forward to seeing you next time.