AI Revolution in 2025? What Lawyers Missed Over the Holidays

Summary

In this episode of 2030 Vision: AI and the Future of Law, Bridget McCormack and Jen Leonard explore how generative AI is revolutionizing legal practice. They discuss major December 2024 advancements like OpenAI’s "12 Days of Shipmas," Google’s Gemini 2.0, and Amazon’s Nova Suite. From fine-tuning AI models for legal use to integrating tools like Deep Research, this episode highlights how AI is streamlining workflows, enhancing accessibility, and reshaping the legal profession. The hosts also share predictions for 2025, emphasizing the rise of AI agents, increased collaboration, and the growing need for lawyers to adapt to AI-driven change.

Key Takeaways

  • AI Aha! Moment: Google’s Deep Research Tool offers powerful contextual insights, delivering comprehensive legal research and drafting capabilities that rival traditional methods.
  • Fine-Tuning for Legal Applications: Customizing AI models with niche datasets ensures confidence in outputs and helps address specific challenges in legal workflows.
  • OpenAI’s 12 Days of Shipmas: OpenAI’s December updates introduced advanced reasoning models, ChatGPT Pro, and groundbreaking voice and video features, setting a new standard for generative AI.
  • Google’s Gemini 2.0: Gemini 2.0 combines multimodal inputs, advanced deep research, and seamless integration with Google Docs, redefining AI capabilities for lawyers.
  • Amazon’s Nova Suite: Amazon’s entry into generative AI with Nova Suite signals a major shift, offering tools for legal professionals to enhance content creation and improve efficiency.
  • AI Predictions for 2025: This year will see AI agents as integral team members, greater public access to legal information, and lawyers adapting to an accelerated AI landscape.
  • Collaboration and Innovation: AI will foster collaboration between lawyers, clients, and technology, empowering the legal profession to serve more people efficiently.

Transcript

Jen Leonard: Hi, everyone, and welcome to the next episode of 2030 Vision: AI and the Future of Law. I am your co-host, Jen Leonard, founder of Creative Lawyers, joined—as always—by the fabulous Bridget McCormack, CEO and President of the American Arbitration Association. In every episode, Bridget and I are here to break down some of the most important developments in the AI landscape and draw connections for our audience of lawyers as to what this means for the legal community. Hi Bridget!

Bridget McCormack: Hi, Happy New Year! It's great to see you.

Jen Leonard: It’s great to see you too. In our episode today, we are kicking off the new year by filling people in on the end of last year, which I think most people probably missed. (It was hard even for me to keep track of, as somebody who follows this really closely.) A whole lot of stuff happened in December 2024, and we want to share a bit of that with our listeners and discuss what it might mean for legal.

So we’ll start, as we always do, with our AI Aha!’s—the things that we’ve used AI for since our last episode that we found particularly magical or interesting. Then we’ll share a definition of an AI-related term we think would be helpful to know as we’re all getting our arms around the new jargon we need to understand to infuse AI into our lives. After that, we’ll dive into our main topic and get everyone updated on some of the end-of-year releases from the major tech companies and what that means for legal down the road. And we’ll close by offering some of our predictions for the year ahead. (I think it’ll be a busy year—nobody would bet otherwise at this point!) 

Jen Leonard: Maybe, Bridget, you could kick us off with your AI Aha! from the last couple weeks?

Bridget McCormack: I will! Every once in a while I do a compare-and-contrast across the models I work with—which, as you well know, are usually Claude and ChatGPT—but I added Google back into the mix. I have a Gemini subscription, but I just haven't used it as much as the other two. When Google's Deep Research tool came out (a new research feature that comes with your Gemini subscription), I put that in the mix.

I asked the exact same question—or rather a paragraph of questions, which is more like what I usually do. I like to use a legal question because, as we’ll talk a little more about later, there’s an awful lot of law that’s not really in the public domain. These LLMs haven’t necessarily been trained on all of that law. So it seems likely you might get different answers across different models. I like to compare and contrast to see how I feel about how they’re doing. And the Google Deep Research tool absolutely blew me away. I was asking a question about the unlicensed practice of law (UPL) and large language models—like how state laws might treat people who ask legal questions of LLMs, and the LLMs themselves or the technologists who built them. The differences across the models were pretty interesting.

Do you sometimes get this Claude mode where it says “I’m only giving short answers” or concise answers? 

Jen Leonard: I got frustrated with that this morning…

Bridget McCormack: I got that this morning too! Claude sometimes switches to a “concise mode” where it just gives me five bullet points. I’m like, “No, no, no—I want the old Claude! Talk to me with big words!” I don’t like the concise mode. So maybe it wasn’t a fair fight for Claude today, but Google’s Deep Research product is so far and away better that I now have Gemini up all the time, whereas before I used to just pull it up once in a while when I wanted to compare outputs across models. (We’ve talked before about how the NotebookLM tool is an amazing tool.) 

What Deep Research does is let you ask a question or series of questions—mine was three or four questions about UPL and LLMs—and then it comes back to you with how it’s going to approach your research project. It says, “Here’s what I’m going to find out first, then I’ll do this, then I’ll figure out that, then I’m going to write it up, and then I’ll do this,” and then it asks, “How do you feel about my plan for the research?”

You then get to work with it if you think it skipped a step or it’s doing a step you don’t really need. For example, it thought it should answer my question as to Michigan first. I said I don’t care as much about Michigan—I care about the whole country (like, how is this different across the country?). It took that edit and created a new plan. Then it says, “Okay, go do something else. You can go look at other windows while I’m busy doing my research for you,” as if it’s a research assistant that took your pile of folders and went back to its office. And I did! I went back and did other things.

I checked back in when it was done, and it was like, “Okay, I’m done with the research,” and it had a fully drafted memo with footnotes—live links in the footnotes, so you could click each link and check out the source. Some of those links, frankly, were ones I never would have found on my own via Google search (going through all those blue links, which I never do anymore anyway). It would’ve taken me forever to locate them, and there was valuable information in them. And then you can immediately just put the entire thing into a Google Doc.

It’s a stunning tool, and it really made me think: I don’t know how we’re all not going to be using this pretty regularly. It’s such a time saver. And this is even in an area where—again, if you’re a lawyer—you’re going to have to go look for some law that it doesn’t have access to, right? (Specific state statutes, for example. It does have some through secondary sources, but you can’t rely on the legal content.) But the surrounding content, the context, was unbelievable, and the way it framed all of the things that you or I might want to be thinking about was as good as anything I could’ve asked from an associate or a clerk or somebody working with me on a project. It was a very big “Aha!” moment, I have to say.

Jen Leonard: I’m so excited to hear about it, because it was the most interesting release in December and the one I’m most excited to try. I had difficulty accessing it, but we just figured out how I can access it—so I’m looking forward to trying it. 

But the piece that excites me maybe even as much as the research itself is the integration with Google Docs and the ability to immediately convert it into something usable. I’m so persnickety now, and so spoiled, that when I copy and paste from Claude or ChatGPT and it has, like, the pound signs or the asterisks in the formatting, and I have to go through and find-and-replace every single time… It sounds so dumb, but just eliminating that friction is an immediate value-add.

Bridget McCormack: Yeah, it’s pretty incredible. I can’t wait to hear what you do with it—you’re gonna be really excited, I promise. How about you? What do you have for us this week?

Jen Leonard: My AI Aha! has a lot of the same themes as yours, in the sense of feeling more and more like you have a team of intelligent people around you—but in LLM form. I’m working on a writing project and trying to organize the approach, structure my work, and brainstorm different ideas. Like everything else, I’m not using AI as a substitute for actually writing the content, but I was imagining what it would be like to be an author who has a team of research assistants, a thought partner, somebody who’s a really good project manager.

So I worked with ChatGPT, and I used the same prompts on GPT-4o and on OpenAI's new O1 model (the more powerful reasoning model) to see the difference. It wasn't a complicated question that O1 would even really be needed for, but I was curious. I gave it the parameters of the work and then asked: How should I structure my work over the timeline allotted? What are some areas I might explore that I'm not thinking about? And I specifically asked it for other AI tools (not ChatGPT or Claude) that I might not be aware of, which could be helpful.

Both versions gave helpful responses, but O1's were better. O1 offered me more context in its response about how it came up with the AI suggestions it gave. Both of them surfaced some new tools I hadn't heard of before, including one called Semantic Scholar.

It looks a little bit like the SSRN interface, but it feels very robust. It claims to have over 220 million academic papers across all different disciplines. I ran a few quick prompts into it, and it was great. It has useful features: it will automate your citations (always another friction point), it will create online folders that organize your research, and it shows you the “highly influential” citations for a paper.

And I thought that was interesting because it’s not a GenAI product—it's a “traditional” AI product. But part of this era feels like, because we’re so excited about GenAI, we’re also starting to think about how we can use these other pre-existing AI tools better. I was really excited about Semantic Scholar, and also to continue using GenAI. And now, hearing about Deep Research from you, I’m excited to use that as this project unfolds.

Bridget McCormack: Yeah, I think it’s going to be really useful for this particular project. You’re going to work really quickly, I’m afraid—you’re gonna need another project soon!

Definitions: How Lawyers Can Leverage Fine-Tuning

Jen Leonard: Alright, moving on to our next segment. Bridget, I’m going to read the definition of this term, but I know that you and your team have actually worked with this technique. I’d love to hear your thoughts on what it looks like in action. 

Our definition for this week is fine-tuning, which ChatGPT defines as the process of adjusting a pre-trained generative AI model on a specific dataset to specialize its performance for particular tasks or industries. So, some obvious applications or implications for legal… But what has your experience been with fine-tuning, and why should lawyers care about it?

Bridget McCormack: Fine-tuning is a way you can produce generative AI tools for lawyers (or any legal use case) that give people some confidence in the output. It’s when you take a niche data set—like a very specific type of content—to really teach a model about a particular field. 

We did it when we created our Clause Builder AI tool. We have a dataset of perfected arbitration clauses (clauses that courts have upheld because they’re fair), and we fine-tuned our tool using that dataset. 

We did the same thing with our Generative AI Scheduling Order tool. For the most part, the dataset for that tool is Zoom transcripts of scheduling hearings—but we’ve done so many of them that we have a dataset to fine-tune the tool on, so it knows what the elements are of a scheduling order and it can put that together without hallucinating.

Fine-tuning is one of a number of techniques that engineers, technologists, or even just lawyers building a tool for use in their own firm can use to give themselves more confidence about a model’s outputs. 

We also built an HR chatbot using all of our HR information. Our HR team is amazing, but if there are questions that can be answered by the chatbot, that obviously frees them up for higher-level tasks. You can fine-tune there as well. It’s not only for lawyers, of course—fine-tuning would be just as useful in many other fields. But it’s a great technique for adding confidence to a generative AI tool.

Jen Leonard: I imagine in medicine they’re using this for reading scans, creating different datasets for different kinds of diagnostics. And as you said, it’s not only across an entire field, but even within a single organization you can fine-tune your GPT-based technology in different ways for different applications. 
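For readers wondering what the mechanics look like: fine-tuning a hosted model usually starts with a small, carefully vetted dataset of example prompts and ideal outputs. Here is a minimal Python sketch using hypothetical arbitration-clause examples (an illustration of the general technique, not the AAA's actual Clause Builder implementation):

```python
import json

# Illustrative only: a tiny fine-tuning dataset in the chat-message JSONL
# format that hosted fine-tuning APIs (e.g. OpenAI's) expect. A real legal
# dataset would contain hundreds of vetted examples.
examples = [
    {
        "messages": [
            {"role": "system",
             "content": "You draft arbitration clauses that courts have upheld as fair."},
            {"role": "user",
             "content": "Draft an arbitration clause for a commercial supply contract."},
            {"role": "assistant",
             "content": "Any controversy or claim arising out of or relating to this "
                        "contract shall be settled by arbitration administered by ..."},
        ]
    },
]

# Fine-tuning providers generally expect one JSON object per line (JSONL).
with open("clauses.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# With the file prepared, a fine-tune job is created roughly like this
# (requires an API key; exact call shape per OpenAI's fine-tuning docs):
#
#   from openai import OpenAI
#   client = OpenAI()
#   uploaded = client.files.create(file=open("clauses.jsonl", "rb"), purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=uploaded.id, model="gpt-4o-mini")
```

Once the job finishes, the provider returns a custom model name you can call like any other model; because the fine-tuned weights have absorbed the patterns in the niche dataset, outputs stay closer to the vetted examples, which is where the added confidence comes from.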

Main Topic: Recap of Major AI Releases in December 2024

Jen Leonard: Great. Well, why don’t we dive into our main topic, because it’s a meaty one this week—as it feels like they all are every week!

Our main topic this week is all about what happened in December of 2024. If you're like most people, you were probably—professionally—trying to wrap up Q4 work or end-of-year commitments, and personally buzzing about with holiday activity, so you likely weren't paying attention to a lot of tech releases. But a whole lot happened—many, many releases from the major tech companies. We won't mention every single release from December (partly because we can't even keep track of them all, and partly so as not to confuse people with too much detail!). The point really is to make clear that anybody who still thinks the pace of AI is going to slow down or peter out really has another thing coming. It looks like 2025 is going to be even busier, if possible, than 2024. So if there are any remaining skeptics in your lawyer life you want to convince, you could share this episode to show them all the things happening that they might not even be aware of.

What do you think, Bridget, about December?

Bridget McCormack: Yeah, I think that’s the right introduction. We do have a list of all of the releases from the major companies, but a number of them aren’t necessarily relevant (and I haven’t had a chance to experiment with many of them yet). I actually had this idea that over the holiday break, with all this free time, I would experiment and figure out all of the things that had been released in December. And…that didn’t happen. So I am now scrambling to learn all of the new things that are available to us, even as I handle everything else that’s back on my plate.

But I do think the point is the one you made: the time for skepticism is far behind us. It’s time to just figure out what this is going to mean for us and our profession, and get on board with being the authors of what it will mean instead of just hoping for the best. I really want smart, thoughtful lawyers who care about what we do to be driving what this means for us. And I hope that when everybody gets read into everything that happened just in December, you’ll feel some urgency to figure out your plan for 2025.

Jen Leonard: Well, first, I’m really happy to know that you weren’t spending your whole holiday break playing around with large language models—that you were enjoying your friends and family! But we are going to talk about three major tech companies. The main two are OpenAI and Google, and we’ll also mention Amazon at the end, just because we think it’s an interesting development. I’m going to start with the OpenAI releases, and then toss it back to you, Bridget, to share a little bit about Google.

OpenAI’s 12 Days of Shipmas: What It Means for Lawyers

Jen Leonard: OpenAI had this rollout in December that it called the “12 Days of Shipmas” (S-H-I-P, because in the tech world they ship new products). So every day of the 12 Days of Shipmas featured a new product from OpenAI. I am not going to read all 12 products, but just know that each and every one of them was interesting and unique and has all sorts of applications and implications. 

Some of them focused on video generation (which will be particularly interesting down the line), and others were designed more for technologists and app developers, and might not have an immediately apparent use for us. But we wanted to focus on a few.

So on the first day of Shipmas, OpenAI gave to me… O1 and ChatGPT Pro. This was the full release of the O1 reasoning model, which we've talked about before. O1 moved beyond the previous approach (models refined largely through human feedback on their outputs) to a true "reasoning" model that works through an internal chain of thought before producing an answer. This was the full release of that model—it offers faster, smarter, and more accurate AI responses.

And OpenAI also introduced ChatGPT Pro, which is a $200 per month subscription that provides access to advanced models, including O1, O1 Mini, GPT-4o, and advanced voice mode. I will say, from the commentary I've seen about this, that for most people you don't need the $200-a-month subscription. The $20-a-month ChatGPT Plus will suffice. But if you're using video generation applications, maybe it's worth it. Is that what you've heard, Bridget?

Bridget McCormack: Yeah, I’m trying to figure it out. My understanding of what it gives me access to…I think I already have access to most of those things. The one thing I haven’t played with (maybe because I don’t have access) is Sora and the video models. So I don’t know if that’s the big difference, or if there’s some larger context window—I can’t quite figure it out. But I don’t know anyone who has said “you definitely need ChatGPT Pro” yet.

Jen Leonard: And I would say for lawyers: don't worry about ChatGPT Pro, but do make sure that if you're using ChatGPT, you're on the $20-a-month plan with GPT-4o. The number of lawyers I talk to who still use the free GPT-3.5… It's impossible to have a conversation about what's actually happening, because 3.5 is an obsolete technology at this point. Stronger models are coming with different subscription tiers, but Pro is probably not relevant for most people right now.

Jen Leonard: The next release that I thought was interesting was on the sixth day of Shipmas: Advanced Voice with Video, and they also released Santa Mode, which was hilarious. Did you use Santa Mode at all?

Bridget McCormack: No, but I heard a bunch of other people’s Santa Mode clips and they were hilarious. Did you do it with your kids?

Jen Leonard: I did. I tried it with my kids. And also—this was a user error situation—I couldn’t figure out how to leave the default Santa Mode. 

So I would be asking questions like, “Could you help me brainstorm how to frame this email?” and it would respond in Santa style, like, “That’s a great question. Ho ho ho!” Every single response I got was in Santa’s voice. It was a fun little mode where Santa talked to you. My kids thought it was amazing. Even my son, who no longer believes in Santa, I think now re-believes in Santa because we were able to talk to him on ChatGPT!

But also on that same day, Advanced Voice with Video was released, pairing ChatGPT's voice capabilities with live video input (it can see what your camera sees while you talk to it). So…this again goes to the whole point about multimodal ways of interacting with generative AI. These are just going to continue apace, I think. We're not just interacting via text anymore, but with video and voice as well.

Day eight of Shipmas was ChatGPT Search. This was the unveiling of ChatGPT's new browsing/search mode—a tool for retrieving answers from web sources, optimized for speed and relevance. This seems to be OpenAI making sure Google doesn't win the race for AI-powered search. (I know Sam Altman has said he's not interested in making a better version of Google, but certainly they need to hedge their bets because of the things you're going to talk about in a minute, Bridget—things we've already touched on.)

I thought this was interesting. I didn’t even hear about this until we were prepping for the podcast: on day ten, they released ChatGPT Phone Calls, a feature that allowed users to call ChatGPT from their phone for up to 15 minutes for free through a designated phone number. I think this is really interesting. Having worked in government, I know that a lot of times the general public cannot access the cool new solutions you have—because they don’t have computers or don’t use the internet. I love the idea of people being able to call a generative AI, ask questions, and have it use voice mode to respond. What did you think about that?

Bridget McCormack: Yeah, I think it’s fascinating—not only for people in digital deserts or people without technology, but also for people who just don’t want to get comfortable with technology. I mean, I was telling you that my mom passed away in February and I moved her husband to Michigan recently. He’s 81, in a new community for the first time, trying to figure everything out. And I keep sending him links like, “Here’s how to find a dentist,” or “Here’s how [to do X].” And he has no idea how to navigate a website.

I think for him, this is going to be a game changer. I can’t wait to teach him how to do this. When I was a kid, we used to call—what was the number?—we called information to get a phone number. That’s not a thing anymore; you can’t call information. But maybe you can call ChatGPT, because he can’t figure out where to find a phone number on a website. (Once I realized it, I was like: it is kind of complicated. You have to click the three bars at the top right, find “Contact Us”… It’s complicated.) But calling a phone number is something he’s good at. So I think there are a number of people for whom this will be great. I’ll be really interested to see if it takes off in a new way. It’s kind of cool.

Jen Leonard: I also see all sorts of applications for companies—for customer service. I hate chatbots, even the ones that are GenAI-integrated. If I’m on a website and a chatbot pops up, it’s almost like you have to recondition people to engage with chatbots because they’ve been so bad to date. But talking to a human on the phone is often preferable. So the stronger the voice capabilities get, if you’re able to interact that way, I feel like there are lots of opportunities for companies with their customer service.

On the very last day of Shipmas (day 12)—we won't get into this too much because we don't have access to it—OpenAI previewed O3 and O3 Mini, new models showcasing advancements in AI reasoning and interaction. And it seems like it was just August or September that we got access to O1 (maybe later—maybe October), and not even two months later you already have a preview of more powerful models. These O3 models are built on those earlier reasoning models but have significantly enhanced capabilities. I don't know what those enhanced capabilities might be. Do you have a sense, Bridget?

Bridget McCormack: You know, I watched the last day of Shipmas and then I read what everyone who had advanced access was saying about it. There are definitely a number of commentators trying to figure out if we can call that AGI, right? Like, can we call it superintelligence—basically, are we kind of there yet with O3? And I think what OpenAI said was they were looking for people to help red-team it for the next month (you could sign up to red-team it with them), which I don’t think I’m qualified to do. I don’t have anything complicated enough to ask it, to really figure out if it’s answering correctly. But it would be fun! They intend to release it to everybody else at the end of January, which is really right around the corner.

I read Sam Altman’s blog post from yesterday about things happening quickly. And—as always happens—he had one of those tweets, I think yesterday… Did you hear the six-word tweet about superintelligence? It certainly sounds like they think they’re pretty close to AGI. So I don’t know if O3 will definitely be AGI, but it’s going to be something pretty significant. If you’re building technology, you need to know that this is happening.

Jen Leonard: I’ve seen this flurry of social media posts about contests for mathematicians to test the models and figure out what their capabilities are, and commentary like: have we reached the point where human capacity to actually assess these models is insufficient (or at least in very limited supply)? Which is really…weird.

Okay, so that was OpenAI’s December—which is a lot to digest. But tell us, Bridget, what happened on Google’s side of this arms race in December?

Google’s AI Breakthrough: Why Lawyers Should Care

Bridget McCormack: Yeah, so Google did not structure it in a “12 Days of Christmas” format, but they did release some pretty significant new products worth focusing on. Again, I won’t focus on every single one, but I’ll highlight a couple.

The first: Google released Gemini 2.0, its most advanced AI model to date. It has "agentic" capabilities that can help developers and businesses and individuals who use it. It supports text, audio, and video inputs and outputs, I believe. And it acts agentically (it can go and use tools like Google Search on its own).

I heard about Deep Research from The AI Show guys (Paul Roetzer and Mike Kaput), and they seemed blown away by it. When those two are blown away by something, I felt like, okay—that's one I'm gonna check out quickly. And I have to confess that I've used the Deep Research mode more than Gemini 2.0 itself. I understand 2.0 is performing across all metrics as well as the top models from the other companies. So I don't think Google has caught up in the public's imagination to its competitors in generative AI, but I think they may have caught up technologically. That's a pretty big deal, I think.

Google has some incredible advantages long-term, because so many of us use their products for so many aspects of our lives (Docs and Gmail and all of the ways in which our lives are already kind of infused with Google). They have an awful lot of data and an awful lot of integration in our lives. So it felt to me like just catching up was going to be important, but once they do catch up technologically, they have an opportunity to capture a lot of users, given how much we already do with their other products. (And YouTube, of course, is a Google/Alphabet platform, so they have a lot of data there, too.)

The next one: we’ve talked about Google’s product NotebookLM before, and I know you’ve used it and I’ve used it to create podcasts. You can feed it a book or an academic paper and say, “Create a podcast,” and it creates this extremely accurate and very realistic-sounding podcast where two hosts talk to one another. There’s even sort of small talk and laughing, and they interrupt each other, and they deliver the content of the material. It’s actually a fantastic tool for getting content to people who want it in different ways. For example, at the AAA we have a bunch of educational resources that are really valuable to lawyers and to people who want to become arbitrators. We want to use this tool to provide that content in audio form so people can listen to it when they’re going for a walk or in their car—y’know, the ways that we all now consume content.

Okay, back to the release: In December, Google enhanced NotebookLM in all kinds of new ways, including new features for its audio overviews. I heard someone say on a podcast I listen to that you can now interact with the podcast hosts. So you can ask them questions about the material they're talking about—you can insert yourself into the podcast and interact with the material that way. They also increased the capacity and added ways you can customize it and collaborate across your team, to make it more useful instead of just a cool demo tool. I think it's become one where it's going to be hard not to find use cases, if you work in an organization that has more than one person (which most of us do). Have you played with NotebookLM recently?

Jen Leonard: I haven’t since the latest release, but I do think one of the challenges all organizations have is getting people’s eyes on information—getting people to read emails, getting people to read newsletters. There’s just too much going on in your inbox. And this feels like a new and magical way to give people information. Even if you’re thinking about, say, on the law firm side: you’re in marketing, you send out those client alerts to people… Probably some of your clients read them and some of them don’t. But if you send a podcast, and your client can listen to it on the way home rather than having to sit at their desk longer to read it, that’s a big differentiator to me.

Bridget McCormack: At least for me, being able to listen to things while I travel is one of my life hacks—because I travel all the time. So on the way to the airport, waiting to board the plane, then on the plane…those are times when I consume a lot of information. To be able to put even more of the things I have to consume into an audio format that works so well for me is pretty great.

The next one I want to mention briefly—even though I don’t really understand anything about how it works—is that Google introduced this AI weather model, or they announced it. The DeepMind team (their research division) launched a product called GenCast, which is an AI weather prediction tool that apparently outperforms every other weather forecasting model out there, significantly. Like, it can make really specific predictions significantly far out—I think up to 14 days or so. I guess what it does is it adapts to the Earth’s geometry and generates complex probability distributions for future weather. And it provides these really accurate forecasts, especially for extreme weather events, which is a super useful, impactful, life-saving use case for the technology (which I love). 

The sooner we see ways in which the technology can really make an enormous difference in health and safety, the sooner we’re all going to be like, “Okay, we’re doing this now—we’re all getting on board, this is happening no matter what.” The sooner we see things like GenCast, the better.

So I thought that was a really exciting story. I only heard about it; I haven’t seen it operate, so I can’t tell you much more about it. But those were the ones I thought were worth mentioning. Again, there were a number of others. There was a lot of talk about the quantum chip that Google released—I think nobody knows how to use that yet, but it’s obviously going to be impactful when quantum computing is something we all do. It’s very cool that they’re working on it, even though no one can really use it yet. And there were a number of other releases as well. But it was a pretty exciting month for Google.

I think OpenAI got the attention with the 12 Days of Shipmas (that was very smart marketing—every day I was like, “I wonder what it is today!”). Otherwise, in December, people are doing other things. So OpenAI grabbed the spotlight, but I think Google’s specific products are extremely strong. I see Google creeping up in this race a little bit. 

Jen Leonard: Google struggled a bit at first: it was blindsided by OpenAI and didn’t figure out early enough how to leverage its strengths in Search and its Google Workspace suite. It feels like that’s starting to shift a little bit.

Amazon’s AI Nova Suite: What Lawyers Need to Know

Bridget McCormack: And then Amazon entered the chat in a new way—in its own way, right? Instead of just being an investor in Anthropic (the maker of Claude). I don’t know that much about what Amazon is up to, but we should give folks a brief overview.

Jen Leonard: Yeah. We won’t spend a lot of time on this, but we thought the big takeaway from Amazon’s news is—again—to counter any skepticism out there that AI is just hype or that it’s localized to the “same big tech companies” we’ve always been following. Amazon announced the release of its Nova suite, its own family of models. This suite included four text-generating models, Nova Canvas for image generation, and Nova Reel for video creation.

I have not been following this very closely (and I don’t really plan to—I can’t handle any more models!), but I heard someone—I think it was Paul Roetzer and Mike Kaput again—talking about Amazon as something interesting to watch, because of the internal transformation happening there versus the consumer-facing scramble the rest of us are in to figure out these models. They suggested keeping an eye on Amazon’s job postings—specifically, the number of AI-related roles it posts in the years ahead as it builds out internal AI capabilities. That seed was planted in my mind, and I’ve been thinking about using it as a bellwether for how other industries might be affected from a jobs standpoint. (I know that’s different from the model releases, but to me it just felt like: Amazon’s going all-in on AI too.)

Bridget McCormack: Yeah, that’s really interesting. I think a lot of the folks we listen to have been talking about that recently. And in Sam Altman’s blog post, he said 2025 is the year that agents enter the workforce—like we’re all going to have agents basically as members of our teams, right? (Although I kind of already have one with Deep Research, so…)

Jen Leonard: Yeah! And there will be job disruption—we talk about it on this podcast and others. But I was also thinking over the holidays about how almost every single person I collaborate with or talk to is so overwhelmed by the amount of work they have. Long before we even get to job disruption and what it means, I think there are huge opportunities to feel more capable of handling what you want to handle in your professional life without constantly being overwhelmed. I don’t know… Is that overly optimistic?

Bridget McCormack: No, I feel very confident of that. At least at the AAA, we all have way too much to do. And I feel like there are lots of things that we do now just because we’re used to doing them, that we might not have to do eventually. We can keep doing those things that we really do well and that make a difference to the people we serve. It’ll be amazing when everybody can focus on those things. I think they’ll be happier. I think the users (our clients) will be happier. I think it’s going to lead to a better way of serving our users, and probably serving more users. I know that there are certain parts of jobs that people probably won’t do anymore, but I’m not worried about fewer jobs—I think we’re just going to be able to do more things, and better things.

AI in 2025: What Every Lawyer Needs to Know

Jen Leonard: So, what does all of the December activity mean for lawyers, Bridget? We talked a little bit about why these companies might be making these releases (or why not). Maybe we can dive into: why should we care as lawyers that these tech companies are releasing all these things? They’re not going to be immediately applicable to the work most of us do, so what’s the upshot to you?

Bridget McCormack: I think they are immediately applicable to a lot of what we do. You’re correct that we still don’t have a frontier model tool that can produce a legal memo you can just go file in court. Although, as we know from Adam Unikowsky’s work, if you confine it to a set of briefs, it can do a pretty great job writing an opinion or a decision. And if you give it a universe of cases and tell it to stay within those cases, it can do a pretty good job of writing a brief. So it’s kind of close, to be honest.

Putting that aside: even if it does a pretty good job, no lawyer should turn anything in—or give it to a client—without checking every piece of it. That’s your job. But it’s doing contextual work that I think is just as important. Lawyers don’t only answer the specific question of what the law says you can do about X, Y, or Z. It’s also: why do we care whether we can do X, Y, or Z? What’s the context? That’s the thing the Deep Research tool is incredibly good at. When I tried it over the weekend, it pulled from 59 different sources—and it gives you all of them. Some were really important secondary sources from state bars and other blogs and writers that I just don’t know how I would have found on my own.

So I think—and I’m not practicing law right now, but if I were—using these tools would make me a better lawyer right away, because of that contextual support there’s no way I’d have time for otherwise. And I don’t think I would burden an associate with it, you know? You can’t just say, “I really want to understand the context of my problem,” because we’re busy—we’re just trying to get things done. These tools allow you to do that. I think they’re actually immediately useful.

And I don’t want to just repeat what we both said at the beginning, but the pace at which these tools are scaling—and being released with new features and capabilities—is really head-spinning. So for any legal organization, lawyer, or person who works with lawyers who still thinks they can sit this one out—the way some were able to sit out earlier technological disruptions that came for other industries—I feel like the jig is up.

And I know I’ve said that before, but December proved to me that nobody in 2025 will be saying, “No, we’re not doing that. It’ll pass.” (Although I did read a Detroit Free Press column this morning where the columnist—whom I like very much, and she writes interesting stuff—said we should all stop using it. Like, “If you’re using it, stop.” That was literally the message.) She wrote that there’s still a lot to figure out about when AI is going to become fully functional on, like, the entire corpus of the law.

Jen Leonard: But hearing those comments makes me sort of revisit the idea that “we’re not going to use this immediately.” Because one thing I was thinking about in December is how lawyers who serve clients will need to develop new skill sets to collaborate with their clients. 

What you just said about the Free Press article made me think of what some of my colleagues—legal writing professors—were saying in the early days of ChatGPT. They were like, “We’re not going to use this.” Well, that ship has sailed.

The exciting part about this time is that we all have so much access to these tools, and the ability to play with them and figure out how they can work for us. And what I think that means for lawyers is: your clients are going to be playing with these tools and using them before they consult with you. I can’t imagine a world a year from now where anyone who has a lawyer doesn’t consult an LLM before they reach out to the lawyer. And so I think lawyers need to be prepared to respond to that, and to be comfortable sort of having this triad now—of the client, the LLM, and the lawyer—and figuring out how to work together, and where you push back on the LLM and where the LLM is helpful.

It makes me think of another podcast episode we did about doctors. Doctors have sort of had version one of this with WebMD.

Bridget McCormack: And now they’re gonna deal with a much more powerful version of that. But I don’t think lawyers have had that era of clients coming to them with internet research about what they think the law is. I think that’s about to change.

The way different doctors handle the “WebMD problem” has always been so interesting to me in my own life. I had a wonderful pediatrician when my kids were little who knew I was coming in armed with all kinds of WebMD info. (I have one kid in medical school, and I’m always like, “Well, I am a WebMD. So if you have any questions you want answered, just let me know—I’m a very accomplished WebMD.”) And my pediatrician was so patient about it; he would always make me feel like I had smart questions. Maybe I did, I don’t know—but he was wonderful about it.

I’ve had a totally different experience with another physician. I had one here in the U of M system where I came in with an article from a medical journal (I’d gotten it through the university library) about a weird nerve situation. It was interesting—it was a treatment they were experimenting with in the EU but not here yet. I at least wanted to understand what he thought of it. And he was so dismissive and annoyed that I showed up with these questions and that journal article. I literally never went to him again. 

Meanwhile, I’ve had another doctor with exactly the opposite reaction. She was like, “Gosh, that’s really interesting,” and she went and found other information from other EU journals and figured out how to do something about it.

So it’s interesting. I mean, there will be lawyers on all sides of that equation as well, right? There will be some who welcome the extra help (even if sometimes the extra help isn’t quite right), and who welcome the opportunity to explain why. I think it’ll give them an opportunity to show their client their value-add, right? It’s an opportunity, no matter how good or bad the extra advice is.

Jen Leonard: I think that’s so true. And I just had this experience with my doctor. She’s a great doctor—I love my doctor—and she’s, I would say, relatively young (she’s in her 30s). I’ve noticed a change even in the few years I’ve been visiting her, as we’ve all had more access to information. I was telling her about AI in law, and she said, “I know the medical profession really needs to accept that people just have access to lots and lots of information, and we need to help them sift through what they’re accessing to find the good stuff,” rather than just trying to be a person up on a mountain who declares what the truth is. I thought that was so refreshing—and interesting for lawyers, to the very point that you make. That is our value-add: helping people navigate all of the things that are unfolding around them.

Bridget McCormack: If you see it as an opportunity rather than a threat, you know, it will be an opportunity and not a threat.

Jen Leonard: That’s right. We don’t have time to get into all the applications—though I know we will in episodes ahead—but we are at the beginning of a new year, 2025. So maybe we could wrap up today with some of our predictions about what we envision for the year ahead, in light of all these developments at the end of the year. Bridget, what are your predictions for 2025?

Bridget McCormack: I think we’re going to see more agents. I don’t know exactly what Sam Altman means by “agents coming to work,” but I do think all of the frontier companies are going to have agents that people get comfortable with, whether in our personal lives or our professional lives (and probably both). By the end of the year, we won’t be imagining what AI agents are—we’ll just be working with them. Again, I don’t know exactly what that looks like, but I think that’s likely to happen.

I also think there’s going to be more integration of our generative AI tools with the rest of our lives, both work and personal—though I don’t know exactly what that will look like either. I haven’t fully figured out how Apple Intelligence and ChatGPT are going to make my iPhone better, but I know it will (or I’m told it does). That’s still on my to-do list this week: figuring all of that out.

I’m in a Microsoft shop at work and a mixed shop at home, so I’m still figuring out how to integrate AI across all the other things I do regularly. I’m very, very hopeful (and I predict) that it will be better by the end of the year. But what are some of your predictions?

Jen Leonard: I agree with all of yours. And I would add—we’ve talked about this before, including on this podcast—that the liberation of legal information to the general public will start to grow. There are already various projects to get legal information—the corpus of legal knowledge—into the hands of the public, and as those projects unfold and grow, that information will be all over the internet.

As companies like Google get their bearings and figure out things like Deep Research, more and more people will be able to access all of the legal information we have. I know both you and I are excited about the possibilities for regular people to get answers to their legal problems. And to the earlier point, lawyers who serve private clients will have to figure out where their value-add is and how they complement other sources of counsel and advice rather than acting as the sole provider. Those are some of my predictions, and I’m really optimistic about them.

Bridget McCormack: I agree, and I’m also optimistic. I hope very much that you’re right about the law becoming free, because it’s about time. I want the law to be free.

Jen Leonard: And that is the note on which to end this hefty episode about a single month—December! Hopefully this was helpful. Even if you lose track of the details (as I frequently do), it’s useful just to know that the year ahead promises to be a busy one. So park your skepticism, get on board, and start playing around with these tools.
We’ll see you on the next episode of 2030 Vision: AI and the Future of Law. Thanks so much, Bridget.