Predictions for 2026 in AI and the Law

What will AI actually change in law in 2026 — and how should courts, firms, and legal institutions prepare? In this season two finale of AI and the Future of Law, hosts Jen Leonard and Bridget McCormack unpack what Google’s Gemini 3 and other agentic AI systems might mean for legal work. 

They imagine how private-equity-backed MSOs and ALSPs could drive a new “platformization” of legal services and explore emerging ideas like “AI legal twins” of superstar lawyers, AI co-mediators, and experimental court pilots for low-stakes disputes. Throughout, they focus on what is realistic, what is already underway, and what leaders across the justice system should be planning for now.

Key Takeaways

Platformization of Legal Services: AI plus private equity, MSOs, and ALSPs could create integrated legal platforms that smaller firms plug into for tools, workflows, and analytics.
AI and Talent Strategy: Firms will need new approaches to hiring, training, and leadership roles focused on AI competence and change management.
AI in ADR and Courts: AI co-mediators and opt-in court pilots could help manage high-volume, low-stakes cases while preserving human oversight.
Creativity as an Edge: The next phase will reward lawyers who experiment with AI, design new workflows, and rethink traditional career paths.

Final Thoughts

This episode suggests that AI in law is moving beyond document summaries and email polishing toward agentic systems, new business models, and reimagined dispute resolution. The profession faces a choice: treat AI as a peripheral tool, or as a partner in redesigning how legal services are delivered.

For lawyers, judges, and educators, 2026 is less a distant horizon and more a fast-approaching checkpoint. The actions taken now—around talent, platforms, and experimentation—will shape who thrives in the next era of legal work.

Transcript

Jen Leonard: Hi everyone and welcome back to the AI and the Future of Law podcast. I'm your co-host, Jen Leonard, founder of Creative Lawyers, here as always with the wonderful Bridget McCormack, president and CEO of the American Arbitration Association. Hi, Bridget, it's wonderful to see you.

Bridget McCormack: Hi, Jen, great to see you too. Happy Thanksgiving, happy holidays, and all the rest.

Jen Leonard: This is actually our season two finale. We’ve had a great season with wonderful guests and lots to talk about, and we will be kicking off season three in January with even more fantastic guests—and, I have no doubt, even more fascinating topics to explore together about what AI means for the law and for broader society.

So let's kick off our episode today with our three segments that we always explore together: our AI Aha!’s—the things we've been using AI for in our personal or professional lives since last we spoke; our What Just Happened segment, where we connect the dots for a legal audience with what's happening in the broader tech landscape; and our main topic.

Today's a fun one. I'm excited about our main topic, which is our predictions for 2026 in the tech landscape as it relates to law.

So, Bridget, will you kick us off with your AI Aha!?

AI Aha! Moments

Bridget McCormack: I will. And today's is sort of a return to non-work use cases.
Over the Thanksgiving holiday, I had my husband's family over the night before Thanksgiving to celebrate his 60th birthday, which meant I had to make some food.

Making food for big groups of people is a thing that stresses me out. I follow recipes; I don't really know how to make things on my own, out of my brain. And even shopping for the things in the recipe stresses me out.

Normally I would have a list and have my phone in case I needed to refer to it. But this time, instead, I just fed the recipes I was building up into my two chatbots that I use voice mode with a lot—Gemini and ChatGPT for me right now—and had them, on the fly, helping me with substitutions when a particular pepper—whose name I couldn't even pronounce—wasn't something that was available. They were also doing the math on tripling the recipe and translating cups to pints to whatever—all of which I know you can do on your phone, but you'd have to type it in and it might take a while and be a pain in the butt.

Instead, I was just feeding in the recipes and saying, “This is what I'm trying to do. When I go through the grocery store and ask you questions to help me get what I need, that's what I need from you.” And it was just so much more convenient than the old way that I managed through before.

It was not the first time I've made food for a bunch of people, but it was just a much easier user experience. Probably something many people have done, but a reminder of some of the things that have just gotten easier in life as a result of—especially for me—voice mode with these tools.

Jen Leonard: Well, happy 60th birthday, Steve. One of these days I need to meet Steve and tell him personally.

Bridget McCormack: You really do. He feels like he knows you, but yeah, you're going to have to actually make that happen.

How about you? What's your Aha!?

Jen Leonard: So, something that stresses me out is the cold. I hate the cold more than anything in the world.

I'm in Philly, which tends to be cold in the winter, and people are always like, “Where are you from originally?” I’m like, “I'm from here. I've hated it my whole life and I still hate it.”
Our family was invited to go on a ski trip with wonderful friends and four families from our neighborhood. And I will take being miserable and cold and being with friends over being warm and having FOMO. So we are going.

We sort of know how to ski, but we're not avid skiers because of the aforementioned hatred of the cold. So I went into Gemini 3 and I told it all of our ages and my hatred of the cold and asked for a shopping list for Cyber Monday of, “What are the things that I need to make sure that I get to make sure that we're all warm and not miserable?”
It generated a list for each of us based on our ages—and especially for me—and it was cute. It was sort of like, “If you're going to splurge on one thing, get these battery-operated heated socks,” which I did order. 

And then for my nine-year-old daughter, it recommended that I get ski pants with a flap so that I don't have to take everything off of her every time we get off the mountain if she has to use the bathroom, which I thought was a really good tip.
So it was very, very helpful. I just went through the list—base layer, mid-layer, outer layer—and all the things. It said, “Don't double sock because it'll cut off the circulation in your legs,” which is a mistake I might make. So it was very helpful for me, and it took like five minutes to get the whole list together.

Bridget McCormack: Did you have it suggest or give you links for buying any of the things that you didn't already have, or did it not take that step?

Jen Leonard: I did not ask it to, but I did ask it to recommend certain brands for certain parts of the get-up. For the base layer especially, I wanted to make sure it was really warm.
So I asked it for more detail, and it said 250-weight merino wool is the one to go with if you really want to be warm.

And then I asked it for any above-and-beyond things—accessories—that really, truly miserable people might want, and it recommended these things called “Bootaclavas”. They're boot warmers. I did not splurge on those, but I thought that was cute. I found them online and I was like, “That's a bridge too far.”

Bridget McCormack: I did a similar thing when I was preparing for that hiking trip in Italy. I wanted to understand rain pants. My friend that we were going with, who's done this before, was like, “You need rain pants, you need this, that…” I was like, “What even is ‘rain pants’? What does that mean? Does it fit over my other pants? Does it fit over leggings? What does it mean?”

And I did end up asking it about brands, because for things like that—for rain and waterproof and warmth—I really wanted more specific recommendations.

I just got that feeling like this must be such a really interesting time for businesses that are used to significant online sales. Now more and more people are going to be getting recommendations like you did and like I did from these AI tools. And I don't know what that means, but it must be a pretty interesting set of questions for those… I don't know who takes that on—the AI engineers, the marketers, some combination, some new job we don't have a name for yet.

But whatever it is, it's definitely going to be an interesting part of the future of retail, right?

Jen Leonard: I feel like I can't wait till Bootaclavas take off, just because—who knew they were a thing? But anybody who's cold now and asks about it, the Bootaclava industry has not been prepared for what's about to happen to it.

Bridget McCormack: Or maybe they are. Maybe some brilliant AI marketer at bootaclava.com is the one that is feeding our favorite chatbots this information that you're getting. I don't know. That's my question.

Jen Leonard: I like to imagine there is a 20-year-old woman in Wyoming somewhere who's telling her grandfather, who owns the company, “Just trust me. I know what I'm doing. This is going to take off.”

And Grandpa, this Christmas, is just like, “I don't know what she did, but sales have gone through the roof.”

Well, I'll report back. Now my fear is that I'm going to be too hot on the mountain. I feel like I'm going to stand somewhere on this mountain and it's just going to be a puddle of water because I'm going to melt everything around me—but I'll be so happy. So that was my AI Aha!.

What Just Happened

Bridget McCormack: Yeah. Well, that'll be fun. All right, so let's move to What Just Happened.
For today, we are going to update our listeners about the release of Gemini 3. So why don't you walk us through Gemini 3—what it means, what's good about it, what you like about it?

Jen Leonard: Sure. So Gemini is Google's AI offering to the marketplace. It competes with OpenAI's GPT models and Anthropic's Claude models and several others that we don't generally talk about as much, but it's one of the leading AI models out there.
It has caused a really big splash in the world of technology, which is why we're talking about it today.

It really is moving its AI offering from just a chatbot—which was really sort of a similar offering to ChatGPT and Claude—to an AI that acts and is more agentic than its previous offering.
It combines “system two”-like deep reasoning that takes a pause to think with a new platform called “Antigravity” that allows the AI to write and deploy its own software.
And it's currently referred to as the “smartest model in the room” on almost every major benchmark that technologists are using to measure this arms race across the AI models.

So what are the things that make Gemini 3 special?

Its DeepThink mode—we've talked about this before, and we use these reasoning models in our own lives. It's similar to OpenAI's reasoning model. It has a DeepThink toggle that you can select, so it's not doing what the earlier classes of models were doing, which was just spitting out the next most likely token. It's taking its time, critiquing its own logic, and solving multi-step problems before it responds.

I think this is probably the thing I've heard people commenting on most: its native multimodality. It has a “one-brain” architecture. A lot of the other models that we've been using have really been a patchwork of vision and audio tools.

Gemini 3 is natively trained on all of these different modes all at once. So it can watch a video and understand the physics of what's happening, or listen to a podcast and get the tone—not just ingest the text from the podcast.
It's been scoring really well on the benchmarks for visual reasoning—nearly double the ChatGPT 5.1 score. And it sees abstract concepts better than any other model out there, which was a critique of the other models before.

And then I mentioned the “Antigravity” and agentic workflow. It works alongside the model and allows Gemini 3 to act as an autonomous agent. It has direct access to a code editor, a terminal, and a browser. So you tell it a high-level goal, and it handles the implementation of how to execute that goal.

It has a huge context window, meaning you can include massive amounts of information in your prompts. It has a high level of reasoning—the gold standard for intelligence—so it's beating all the other models on intelligence. And it has surprisingly competitive pricing, making it cheaper for higher-end tasks.

So why does any of this matter?

First, we're moving away from the “chat era,” where we're just chatting with an AI, to an era where the AIs are acting more agentic and able to execute on our goals. To me, this is the most interesting part.

Second, I find the competition among these companies to be super interesting.

Google was caught very off guard when OpenAI launched ChatGPT three years ago and somehow managed to pivot its entire business model away from search toward AI. And last month—November—seems to be the month when it started to leapfrog OpenAI.
Just today, an article came out in The Information reporting that Sam Altman of OpenAI issued a “code red” to his employees, saying they really need to catch up with Google now. And if you're following this space, you'll recall that when OpenAI released ChatGPT, Google famously issued its own code red saying it needed to catch up with ChatGPT. So now the tables have turned.

Gemini 3 is now the current leader on what's called “Humanity's Last Exam” benchmark, which is supposed to be un-gameable and is an incredibly difficult benchmark.
So it's a problem-solving, mastermind, artificial intelligence, agentic platform that is currently winning the race in AI. It's a very big deal in the tech landscape and is shaking things up.
Have you had a chance to try Gemini 3? And am I missing anything, Bridget, that I should be sharing about Gemini 3?

Bridget McCormack: No, I think you got it. 
I have been using it. I sort of regularly use ChatGPT 5.1 and Gemini 3.
I don't know that I personally am using it for the kinds of things that would show it off the way others have. Apparently it can do PhD-level science and math, and I just don't do that much of that. But I do think it's pretty excellent. I've been fascinated by the reaction across the internet.

Like, Marc Benioff said he used to only use ChatGPT and now he only uses Gemini.
And I don't know how much of it is real or how much of it is put on, but there certainly is quite a reaction. I'm even seeing more from the CEO of Google, Sundar Pichai, and Demis Hassabis, the leader of their AI lab, online a lot lately. Maybe I just wasn't noticing it before, but it feels like the “Google is back” story is showing up in all kinds of ways.

I never for a minute counted Google out, even though they were a little bit slow out of the gate three years ago with Bing or Bard, I don't know. They had these lame chatbots for a little while that just weren't very good.

But I always figured they'd be here, because they have so much of our data already at their fingertips, right? We're all in Google all the time—our Gmail and our Docs; so much of our lives they have access to.

And they also have a cloud; OpenAI doesn't. They make chips, too; the market even reacted negatively to NVIDIA's unbelievable quarter when it learned that Google was selling a lot of its own chips.

And I think its lab is as good as it gets. I'm just very impressed with Demis and his team. So I have thought all along that Google was in it for the long run.
So I'm not that surprised to finally see a pretty exceptional model from Google. And not only now: I thought Gemini 2 was also good. I don't mean to sound like it took Gemini 3 for me to take them seriously.

I don't know. Are you using it much, and do you find it useful?

Jen Leonard: I have been using it a little bit—not for really complex stuff yet—but I love the interface.

One of my biggest problems with ChatGPT, which seems like such a minor problem, is when you copy and paste from ChatGPT to Google Docs, there's all this cleanup you have to do with hashtags and little leftover things.

Obviously Gemini is a Google product, so it seamlessly integrates with all of our Google stuff, and I just love the interface. I used to not like Gemini's personality. It was just so stiff to me; it didn't feel like working with a “co-intelligence,” as we talk about it.

But I started really disliking ChatGPT's personality. It felt very bro-ish to me.
It's interesting how these models sort of take on the personalities of their founders, because I really started gravitating toward Claude. I think Claude's a really great writer. It always felt more sophisticated in the way it would talk to me than ChatGPT did.

ChatGPT felt a little bit more jokey or flip to me. And now the new version of Gemini, to me, feels like it has found more of a tone that I like.

The multimodality is really nice: being able to just drop in links and videos. I haven't used it enough for that yet, but that felt clunkier to me in earlier versions.
But I think for the strategy part, I totally agree. And you turned me on to the Acquired podcast, which I love just generally.

There's an episode about Google that you sent me and were sort of like, “This is why I would never count Google out.” In addition to all the things that you mentioned, the other thing is they have an enormous revenue stream that they can use to fund all of their AI efforts.
OpenAI and Anthropic—this is the big challenge. They're hugely valuable companies, but they're startups, and they have not yet figured out their business model. Every time I tune in, they're doing so many different things, it's not exactly clear to me what they are as a company. And I get that—they're a startup, so they're trying to figure out what works.

But Google feels much more disciplined to me as a much more mature company. Sundar seems like a more disciplined leader, and Demis is so brilliant. So I don't know; it feels very substantive.

Bridget McCormack: Did you happen to see in The New York Times this morning this op-ed by a trauma physician on the Waymo data that was just recently released?
It's so fascinating. And I, of course, was trying to think about how you would do a similar A/B test with AI systems in law.

So, just to quickly cut to the chase: Waymo has so much data now on the way Waymo cars perform. This physician looked at it for many, many weeks and compared it to data of human-driven cars. There’s really no argument—it's significantly safer.
I think everybody kind of knew that already, but the data is at the point now where he was saying that if this was an ongoing experiment with a new drug and you had this kind of success with it, it would be unethical to keep giving the placebo to anybody, because it's that clear of a public health benefit.

And I feel like we could have similar findings about legal information, right?
If there were a way to collect and release data for researchers to make some sense of the difference between access to legal information—both rights and responsibilities and legal processes—with AI versus without it, it seems like such a significant difference that it might accelerate some adoption across the legal profession.

Of course, that's where my brain went when I was reading about Waymo data.

Jen Leonard: No, I totally agree. We live on a very busy street, so I saw that Waymo article and thought, “If only we could have Waymos instead of the human drivers that drive on our street.” 

They're nightmarish.

But yeah—the outcomes in the legal system. I would love to see some data around AI outcomes.

Main Topic: Predictions for 2026

Jen Leonard: So, moving on to our main topic: we're going to do a fun one today, which is our predictions for 2026 in AI and the law.

Maybe one of them will be starting to capture some data around AI.
Let's kick it off with you, Bridget. What do you think is going to happen as we enter 2026? We're starting to get a little bit more mature in how we think about AI. So what's your first prediction?

Bridget McCormack: My first one is AI-related, but it also loops in a couple of other things I feel like I'm seeing that are being accelerated by AI.

In addition to a faster pace of change across the business of law and the practice of law, we're seeing significant private equity interest in the business of law—specifically in MSOs and ALSPs.

I think the combination of—the interest of capital in our business—and AI's ability to scale some of the solutions that MSOs and ALSPs were providing to lawyers is really going to move us quickly, and I think we'll start to see this even next year, into a platformization of law.
You'll see a combination of those, where MSOs could be the infrastructure layer. They'll be the backbone of legal delivery, offering tools and workflow automation and compliance engines and knowledge management and a lot of useful data analytics. They'll be able to scale those services.

Then I think the ALSPs can become the production layer. They'll be able to bring specialized, AI-enabled services in lots of areas that are inefficient or impossible for individual lawyers and law firms to build or scale, like discovery and contracting and maybe dispute analytics and document workflows.

Private equity and capital are just going to integrate it in a way that might otherwise take a lot longer and scale it a lot faster, I think.

The unified takeaway is that we're going to start to see the rise of these AI-native legal platforms. I don't mean AI-native law firms—we've already seen those in 2025—I mean something larger, like these integrated ecosystems where a combination of MSOs and ALSPs and this strong interest from capital is fueling some consolidation.

Then I think law firms—not necessarily the Am Law 100, but smaller law firms—could plug into these platforms and provide the unique human services that lawyers will continue to provide and which will be, maybe, more and more valuable.

So it feels to me like we might start to see the beginning of a totally different infrastructure of the legal business. 

Jen Leonard: Definitely agree on the private equity and MSOs. I see it in the headlines. I see people like Jordan Furlong writing about it. And then I hear it in my travels, sort of on the down-low—that people are getting calls and talking to people. It's definitely happening and definitely real.

So my first prediction is all about talent in 2026, across the board.

I think talent at the entry level will be a challenge, because of the lag in law schools in preparing people for the changing landscape. When new lawyers get to their places of employment, it will soon become evident that they are not as prepared as people expected them to be. So there'll be a lot of accelerated professional development happening.

Ropes & Gray just made a big announcement last month: they are crediting 400 hours toward first-years' billable requirements for experimenting with AI and learning how to work alongside it. That's a massive investment by the firm. At a firm where I think the requirement is about 1,900 billable hours a year, allocating 400 of those toward that is huge.

And then, in the lateral hiring space—there's so much lateral movement in law firms. Our team has been running workshops with firms to think about both how firms are signaling to the marketplace that they are AI-forward when they're looking for laterals, and, equally important, how they are trying to suss out from the marketplace that they are attracting partners and senior associates who have AI skills—and what that process looks like.

And our friend Whitney Stefko and I had a conversation about this at Practicing Law Institute recently: how PD professionals in law firms are reimagining their on-campus interviewing process. It used to be, “What's your favorite class?” “Why do you want to come to our firm?”—just a chat with alumni who went to that school, who are partners now. It's becoming something very different.

You really need to look under the hood and equip your lawyers with some sort of matrix that they go in with: questions and competencies that they're trying to evaluate during interviews that are now happening.

I saw an ad for a firm on LinkedIn the other day—it was in November, before Thanksgiving—promoting summer associate positions for 1Ls. They have not even taken their first set of exams yet, and you're trying to assess all of this so early.

All the way up to the head of your AI efforts. Those positions were not very well-designed in the first couple of years, but people were leading them and making progress, and then either becoming frustrated or being attracted by higher salaries or better packages at other firms and leaving—and losing all of that energy and progress.

So I think firms are going to be much more intentional in 2026 about structuring those positions well to keep that talent in-house and having a better sense of what they're actually looking for. That has to be a killer if you've built a program like that and you lose the leadership of it.

I think we're at a stage where it's become clear to me that AI is here to stay, and you need somebody who is an expert in change management and can translate across the practice side and the business side, and should sit at the C-suite level to oversee all of these things. So talent, to me, is key in 2026.

Bridget McCormack: Yeah, that makes a lot of sense. And I don't think anybody knows as much about that as you do, given all of the interactions you're having with so many law firms. I think you follow this market—the talent part—even more closely than anybody else I know.

My second prediction is like a tiny little addendum to “talent, talent, talent.”

Maybe this isn't a prediction; maybe this is, “I think I have great ideas, so it should be a prediction—somebody should do it.” I don't know if that's what we mean by predictions.
But I keep waiting for AI legal twins. Like, why have one Lisa Blatt or one Paul Clement when you could have two or five or ten at your law firm?

I don't mean just a chatbot, but a fully realized digital version of that superstar lawyer's legal mind. Imagine taking all of their work—their writing, their judgment, their instincts—even having them talk through legal problems with a tool you were training. You'd have it listen to all their oral arguments, their internal memos, their strategic decisions, even their rhetorical style.

Then you could extend that lawyer's work beyond hourly work, right? Humans have this limit: we have only so many hours in a day; we get tired; we get hungry.

But the digital-twin Lisa Blatt could draft 24 hours a day, spot issues, supervise real Lisa Blatt's work, ask Lisa Blatt-like questions of Lisa Blatt’s drafts, run strategy sessions, and even act as a partner or mentor to more of the up-and-coming lawyers who have to learn how to do things in new ways.

The business model feels almost… There's so much creativity around it. You could imagine subscription versions for a Paul Clement twin, or use cases like brief review or moot court prep.

“I want Paul Clement's twin on my moot court panel, and I'm going to subscribe to that for whatever the…”

I don't know how to do pricing. We're going to have to have Jae back on to tell us what to charge for a Paul Clement twin.

Eventually you could imagine a marketplace of elite digital legal minds available.
Obviously there will be hard questions with this, but I just feel like there's too much upside potential. It's one of these ways in which the technology just can do things that we humans can't, because we're limited by biology and time.
So add that to your list of talent things to watch out for. And when you find the first digital twin, call me. I want to know.

Jen Leonard: I totally will.

I want to make a digital twin of you. I think there already are digital twins. I think maybe you're just messing with us with the prediction because you're everywhere. It's just a matter of when we find out that that was true all along.

Since the beginning of the GenAI era, my favorite use case has been using it for more formative coaching—especially in law school and the junior years. I think law firms in particular, because of the way that they store their documents, are so well-positioned to do this.

They store them by lawyer number. So if Bridget McCormack is a partner, you can customize a “Bridget McCormack coach” for your juniors and they can access her anytime, and I think that's amazing.

So maybe I'm going to replace my second prediction with a new one, because you've inspired me to think differently—which is: I think that 2026 will reward—and I'm going to use my own company's name here—the creative lawyers.

There's just so much to be won by creativity. We run creativity workshops all the time, and you can sort of lead a horse to creativity, but you can't make them always be creative.

When people are creative, the things that they can come up with and now be able to do are just astounding. And so I think now you have these—I'm going to channel our friend Rachel Dooley here and say “soft walls,” where there are spaces to be creative because of the regulatory liberalization and the technology and the people who are wanting different career paths.

So I think that 2026 will belong to the creatives. That will be my short prediction for 2026.

Bridget McCormack: I love it.
All right, so my next one is a little bit closer to home.
I see such potential in the technology for alternative dispute resolution. As you know, obviously, I've spent a lot of time thinking about that.

I think there is so much value to be gained from building an AI co-mediator. A lot of times in mediation, there's a lot of upfront work that has to happen for the mediator to be effective.

They really need to spend some time understanding not just where the parties disagree, but where the really stressful parts of the disagreement are—which might have very little to do with the legal questions, right? Sometimes they're emotional questions, depending on the kind of case.

Mediators spend a lot of time getting parties ready. They map out all of these things individually with the separate parties, and they can do better or worse jobs at that depending on how much time they have, how good a listener they are, how well they can pick up on frustration and emotion, and how well they read the space between what someone says they care about and what they really care about. There's all this human stuff that really matters.

But a lot of the preparation work that gets the mediator to the point where they can then find the sweet spots where there is some overlap—and where they might be able to move people—can definitely be done by what I'm thinking of as an AI co-mediator.

So before the mediation begins, each party gets to work with the co-mediator first to make sure that they feel understood and heard, and that their goals are very clear. It would give the human mediator a real advantage in doing the work that the human mediator is going to be able to do better than an AI—in my view, at least for a long time.

I think it would not only save a lot of time, which translates often to money—depending on what kind of mediation we're talking about—but I think it would just give the parties and the mediator so much more clarity and insight and a stronger head start, that I think it likely makes mediations significantly more successful and parties significantly more satisfied.

So I think every mediator is going to have an AI co-mediator to help them with the parts of the work that can be done just as well, and therefore allow them to really focus on the parts that the human is going to do so much better.

Jen Leonard: Very cool. I love that. I'm going to say that I think we'll start to see the emergence of more rigorous methodologies for measuring ROI.

Meaning, I think that everybody out there who is starting pilots will start at the beginning with better measures of what they're trying to get out of a pilot, and with more systematic ways of gathering that information and assessing whether it's successful by comparing it to those original goals. That seems obvious, but we're not really accustomed to it in this industry. I think we'll start doing that more in 2026.

I think we'll also see deeper use cases versus just having everybody get onto the platforms to refine their emails and summarize documents—taking it a little bit further.

In the same vein, I think we'll see more frameworks developed for things like: What does competency look like when using AI? How do we supervise output to make sure that the underlying logic of the arguments is sound, that there aren't hallucinations—those kinds of things?

I think that's all good. My hope is that people don't feel tethered to some sort of industry standard that stifles innovation and doesn't allow them to continue to explore.
Because I do feel like lawyers have a tendency to adhere to one thing, and it's biblical, and we can only use this one framework forever.

I’ve sort of enjoyed this liminal period where we've all been trying to figure it out, and I don't want that energy to go away. My one negative prediction for 2026 is that we glom onto one thing and that is the end of our exploratory phase.

Bridget McCormack: Yeah. That is a way we do it, we lawyers.

Okay, my final one is: I think 2026 is a year where we will start to see some very experimental, opt-in—both parties have to agree—court AI resolution for certain low-stakes, high-volume matters.

Again, where both parties choose it. Courts can therefore structure experiments in a way where parties can have an appeal to a human judge if they are dissatisfied with it.

But it’s a way to start figuring out where AI adjudication can help parties and therefore help courts manage dockets that have been hard for them to manage for a long time—at state courts, at least.

I don't know where we'll see it, but there will be some innovators out there who will stand up some of these pilots. We'll start to be able to collect some information—qualitative and probably quantitative—as a result of some of those pilots.

Call it ODR 2.0—the new frontier of online dispute resolution—which ultimately, I think, will be a good solution for certain kinds of people, parties, and cases in the pipeline.

Jen Leonard: All right. Well, that's a wrap on our first-ever predictions episode. We'll have to touch back next year at this time to see how we did.

But one thing's for sure: the world of AI and the law is not slowing down.
We’ve had a great time this season exploring it with all of you. We hope you'll join us again in season three.

In the meantime, we wish you a happy holiday season with you and your loved ones. And, Bridget, thank you so much for exploring this fascinating world with me.

Bridget McCormack: So much fun. Looking forward to season three. Nobody I like to collaborate with better.

Jen Leonard: Take care, everybody. Be well.

December 23, 2025
