How Will AI Tools Like GPT-4o and Notebook LM Shape the Future of Law?

Summary

In this episode of 2030 Vision, Jen Leonard and Bridget McCormack discuss the latest advancements at the intersection of AI and the legal profession. They break down key concepts like red teaming to enhance AI security and the human-in-the-loop approach that ensures AI serves as a collaborative tool rather than a replacement. 

The conversation explores two transformative AI tools—Google Notebook LM and GPT-4o with Canvas—and their potential to reshape the future of law. These technologies promise to revolutionize legal research, streamline productivity, and make legal services more accessible. This episode provides valuable insights into how AI applications can improve judicial decision-making and foster collaboration between law firms and courts, advancing civil justice.

Key Takeaways

  • Red teaming helps secure AI applications.
  • Human-in-the-loop ensures effective AI-human collaboration.
  • AI can generate engaging podcasts from documents.
  • Google Notebook LM acts as a powerful legal research assistant.
  • GPT-4o with Canvas provides an editable workspace for legal collaboration.
  • AI tools boost productivity and democratize legal services.
  • AI’s role in speeding up judicial processes is shaping the future of law.

Transcript

Jen Leonard: Hi everybody, and welcome to the next episode of 2030 Vision: AI and the Future of Law. I am your co-host, Jen Leonard, founder of Creative Lawyers, and I'm thrilled to be joined as always by the fabulous Bridget McCormack, president and CEO of the American Arbitration Association.

Bridget and I co-host these conversations to help update and upskill and hopefully educate a little bit about some of the emerging technologies that we're seeing in our landscapes and what their implications will be for the future of law—and how lawyers and legal professionals can start thinking about them, playing with them, experimenting with them today.

So on every episode, we have three different segments. We start by sharing a couple of definitions that might be new to listeners and what they mean in the context of artificial intelligence. We then share our newly rebranded AI Aha!, which is the moment that each of us has had since our last recording that felt magical when we interacted with artificial intelligence in some way. And then we dive into our main topic for the day.

Today we are going to actually be very, very explicitly focused on the tech and share two new releases—one from Google and one from OpenAI—that we find to be particularly interesting and have huge implications, we think, for the future of legal.

So with that, we'll kick it off with our definitions. And I'm gonna turn it to Bridget to share our first definition, which is “red teaming”. So Bridget, can you tell us what “red teaming” is?

Definitions: Red Teaming

Bridget McCormack: Yeah, I love this term because it sounds maybe even cooler than it actually is. Red teaming just sounds like a cool thing to know what that is and what it means. But red teaming is just a technique that's used to test the security of AI models by simulating attacks. I believe it's usually the kind of last phase that a tech company goes through before they release a model.

And they want to identify any weaknesses or flaws before they release it into the world. They want to do it in this controlled environment, usually together with developers, so that they catch any big security issues before it's too late. What about “human in the loop”? I feel like I hear this more and more.

Jen Leonard: So human in the loop is pretty much exactly what it sounds like and I think should give most lawyers a little bit of peace. The human in the loop concept is the idea that we're not going to just hand over to artificial intelligence the ability to do all of the things that we do as professionals, as citizens of the world, but instead collaborate with artificial intelligence and make sure that we keep contributing our human input all along the way—from training and design and development through testing the outputs and revising and refining over time. So a strong AI policy and approach will always include a human in the loop. 

Bridget McCormack: Yeah, I do think so. And I do think it's also a helpful concept for lots of people who are just getting comfortable with what these generative AI systems can do. I think it's an important one for everyone to understand.

AI Aha! Moments: ChatGPT’s Latest Features for Legal Professionals

Bridget McCormack: So our second segment, the newly rebranded AI Aha! Moment, where we get to talk about our own personal times this past couple of weeks when we've been surprised or delighted by something the technology can do. And you have an interesting one this week, Jen. What's your AI Aha! Moment?

Jen Leonard: My AI Aha! might not be that interesting in terms of what I did with the AI—because it's not that original—but the mode through which I did it was interesting to me.

So if people have not yet tried OpenAI’s GPT-4o voice mode on the ChatGPT app, it's really incredible. You can select the voice at the outset that most resonates with you, and then you can just speak into your phone. It very much feels like Her, the Joaquin Phoenix movie. It really feels like you're interacting with a human being.

And the thing that makes OpenAI’s voice mode model different from—and better than—some of the other models you might experience, like Google Gemini's voice model or something like Siri on your phone, is that it's not taking your voice and then on its own end translating the voice into text and then generating a text response and converting that response into voice.

It's actually just directly understanding your voice without an intermediary translation into text. It might sound like a very technical distinction, but it's a huge leap in the technology to be able to move from text-to-text conversation to true voice-to-voice conversation. So it's very seamless and very powerful.
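The distinction Jen is drawing can be sketched in a few lines of toy code. Nothing here is a real API; every function below is a stand-in for a real component, and the only point is where the speaker's tone survives in each design.

```python
# Toy sketch contrasting the two voice-assistant architectures.
# All functions are illustrative placeholders, not real APIs.

def transcribe(audio):
    # Speech-to-text: the words survive, but tone is discarded.
    return audio["words"]

def generate_reply(text):
    # Text-only language model: sees words, never hears the speaker.
    return f"Reply to: {text}"

def synthesize(text):
    # Text-to-speech: has to pick a default tone on its own.
    return {"words": text, "tone": "neutral"}

def cascaded_pipeline(audio):
    """Older design: speech -> text -> text -> speech.
    The speaker's tone is lost at the transcription step."""
    return synthesize(generate_reply(transcribe(audio)))

def audio_to_audio_model(audio):
    """GPT-4o-style design: one model maps audio directly to audio,
    so prosody and emotion can inform the reply."""
    return {"words": f"Reply to: {audio['words']}", "tone": audio["tone"]}

question = {"words": "Suggest a quick dinner", "tone": "hurried"}
print(cascaded_pipeline(question)["tone"])     # tone never reached the model
print(audio_to_audio_model(question)["tone"])  # tone survives end to end
```

In the cascaded design, the reply is synthesized with a default voice because the transcription step threw the original tone away; the audio-to-audio design still has it available when forming the reply.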

So I used it for—not a very powerful use case—but I was at my son’s Little League game, which is right next door to the grocery store and we didn’t have dinner yet. And so I was sitting next to my husband, I turned on the voice mode and I said, “I want to create a pretty easy-to-prepare, nutritious meal for my family. It's pretty late in the day and I want something quick and easy. Can you give me five different recipes?”

And it immediately came back with, you know: “Here’s a recipe for shrimp scampi and quesadillas and shish kebabs and shrimp tacos and pasta primavera. Which one would you like?”

Jen Leonard: So I picked pasta primavera. It immediately generated all the ingredients I would need. And then I said, “You know what? I think I’d like a little bit of protein in here, so could you suggest something that would give it a little bit of flavor?” It said, “Absolutely. Here's a revised recipe with some chicken sausage. And that will add a little bit of flavor. And here's how you can prepare it.”

And then at the end I said, “That’s great. So can you generate a new ingredient list that just contains the ingredients from everything that we’ve talked about so far?” And it absolutely generated the list. I took it to the grocery store. It was perfect. Seamless.

But it also made me think about the future of AI as like—wow—as we develop more agents and more AI that can act on our behalf without us having to sort of do the things ourselves, I could imagine if you’re a grocery vendor, one day offering the option of just like, “Great, can you just put in that order for me and I’ll be right over to pick it up?”

So that was pretty magical to me, both because of the voice mode interface and because it helped me in a real way. The dinner was delicious by the way—which had nothing to do with me. It was an easy recipe. But I thought it was very, very cool and showed the possibility of some opportunities in the future to make our lives a little easier.

So what was your AI Aha! Moment, Bridget?

Bridget McCormack: Mine is one we’re going to spend a lot of time talking about in a minute, so I’ll do a quicker version of it. But I started using GPT Canvas last week and I’m really kind of blown away by what it does.

So my one example for my Aha! Moment: I was presenting to the Retailers Association on Friday morning on AI, and I had an idea of what I was probably going to talk about. But I wanted to think a little bit more about the kinds of concerns businesses might have—or their law firm, their outside counsel would have—about AI-enabled online dispute resolution processes upstream.

And so I asked it to start drafting me a thoughtful presentation about what folks would be worried about and what were some of the responses to those worries. And we’ll talk more about how it does all this in our main segment later. But after I went through the process—which again, is stunning—I asked it to put it in the voice of Bridget McCormack.

Bridget McCormack: And there's enough of me out there on the internet, right? Because I've done a lot of writing—both traditional opinions and law review articles and then popular articles—and a lot of speaking. I think my voice is out there. And so it said, you know, like it does all amiably and kindly, it was like, “Yeah, sure, I can put it in the voice of Bridget McCormack.”

And the quote—I think I have it here somewhere—it’s like, “I've changed it to an empathic leader’s patient presentation with an audience of people who are curious but not yet sold.” And it said a bunch of other things. I can’t remember exactly what they were, but... which sounded like it definitely knew how I talk. And I found it really, really stunning.

So anyway, we can talk more about the details of it when we discuss the technology more generally—or this particular technology more generally—but it's already, again, changed my habits that quickly, which is stunning. Have you used it yet?

Jen Leonard: Yeah, I love it. For the audience too—I think for lawyers—they're really going to enjoy using it for reasons that we’ll talk about in our main segment. But it feels different to me. It feels like sort of a step up—not even necessarily in the technology—but in the user engagement and the facility with which you can sort of adapt your content. I love it.

Did you agree with the way that it described your tone?

Bridget McCormack: Frankly? I think it gave me more credit than I deserve. But I really liked the way it thinks that I speak. So I’m fine with that. Yeah, it was great.

Jen Leonard: Two really cool moments, I think. It feels to me like the cool moments just keep coming. So next episode, I’m sure we’ll have two even more inspiring and magical moments.

Main Topic: Google Notebook LM: Your New AI Research Partner

Jen Leonard: And some of those moments might generate from the developments that we’ll talk about in our main topic today, which... sometimes in our main topic, we talk about topics that are explicitly related to the legal profession. But we really wanted to focus today on two more broadly available technologies that may be new to listeners, and think about what the implications might be for the profession—even though in their current form, they’re publicly available.

I’m not sure whether any legal departments or law firms are using adapted versions yet in-house, but we’ll talk about our experience with the two of them as we can access them online now.

So the two technologies that we’ll talk about are Google Notebook LM and GPT-4o with Canvas, which Bridget just started to talk about in her AI Aha! Moment.

I’ll start, Bridget, by sort of describing Google Notebook LM—what it is and how you can start using it today—and then we’ll jump into the second offering.

So Google Notebook LM is really designed to serve as your virtual research assistant. It can help you synthesize information from multiple sources really quickly and efficiently. It uses a feature called source grounding, which I actually had not heard before. I hadn’t heard this terminology before. It sounds to me sort of similar to retrieval augmented generation, which we’ve talked about in other contexts.

But source grounding is the idea that you as the user will create—let me back up. So you log in, you create what’s called a notebook, which is sort of like a folder file, right? And then you use this source grounding feature to upload document types that you want to use as the basis of the information that you engage with.

So these document types right now can be anything from Google Docs—it links directly with Google Drive if you’re working in a Google workspace. It can be PDFs that you upload from your desktop. It can be text files, Google Slides. You can upload audio files—MP3s. At least the last time I tested it, you could not upload MP4 video files yet, but I believe you can now copy and paste YouTube URLs. And my understanding is it’s not watching the YouTube videos, it’s using the transcripts that are generated underneath the videos as a text interface.

So once you upload whatever your content is into the notebook, that becomes the knowledge base for the AI to engage with you. And so you can do so many different things with it. You can ask it to summarize all of the content that you’ve uploaded. You can ask it to generate FAQs—if you’re running a website and you’re taking all the information from your website and generating FAQs to populate the website.

You can have an intelligent Q&A with it and ask it questions about the documents. It can help you brainstorm new ideas. It also has this really cool advanced citation feature that I think is helpful in a world where a lot of the models currently generate results, but we’re not really sure where they’re pulling the individual facts or assertions. This Notebook LM actually has a citation feature, so you can go in and click and check on the source documents, make sure it’s accurate.
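If source grounding works roughly like the retrieval-augmented generation Jen compares it to, the loop is: find the best-matching uploaded source for a question, then answer only from that source and cite it. The sketch below is a naive illustration under that assumption; the keyword-overlap retrieval and the file names are invented for the example, not Notebook LM's actual implementation.

```python
# Minimal sketch of "source grounding" treated as retrieval-augmented
# generation. Real systems use embeddings; this uses keyword overlap.

documents = {
    "brief.pdf": "The appellant argues the contract was void for fraud.",
    "order.pdf": "The trial court granted summary judgment to the defense.",
}

def retrieve(question, docs):
    """Score each uploaded source by word overlap with the question
    and return the best match as (score, name, text)."""
    q_words = set(question.lower().split())
    scored = []
    for name, text in docs.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, name, text))
    scored.sort(reverse=True)
    return scored[0]

def answer_with_citation(question, docs):
    """Answer only from the uploaded sources, citing which one."""
    _, source, passage = retrieve(question, docs)
    return f"{passage} [source: {source}]"

print(answer_with_citation("What did the trial court do?", documents))
```

The citation tag is the point: because the answer is assembled from a specific retrieved source, the tool can always point back to the document the claim came from, which is what makes the click-to-verify feature possible.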

It currently supports 38 different languages. So this will help us with everything from trying to digest content in languages that are different from the language that we can understand and interpret. But I think it also opens up lots of possibilities for having more interconnected conversations with people around the globe, which is really exciting.

So it’s an incredibly powerful tool. I think maybe the coolest feature is the podcast creation feature. You can upload anything from a single document up to, I think, the current limit of 50 documents, and then press a button that says “Create a podcast.” And it takes maybe two or three minutes, and at the end of it, you hear two people, like you and me, Bridget, talking back and forth about the content that you’ve uploaded. And it is wild.

To create a podcast out of your own content is a really, really weird experience. And Ethan Mollick, a leading expert on AI whom we’ve referenced many times, has made the point that Notebook LM has actually been available for a year now, but they just added this podcast feature. And that created the magic that allowed people to get really, really excited about it and connect with it and be sort of fascinated by it in a way that the text-only version didn’t accomplish.

So he makes the point that the way that we encounter artificial intelligence matters, with respect to adoption and uptake and creativity and all of those things. Have you tried Notebook LM?

Bridget McCormack: It’s really stunning. And I’ll tell you what I’ve used it for. So I’ve used it when there’s like a dense document that I have to understand—but try as I might, I just get bored in page two, like over and over again.

So just for one example—no offense, White House—but the White House AI regulatory guidance that now is kind of old, but I still need to understand some pieces of it as I’m thinking about all the things to do better in dispute resolution. So I uploaded that document and used the podcast feature.

And the podcasters are so engaging. They’re so human-sounding. They like step on each other’s words and, you know, laugh and don’t have all the answers. They have questions about the material as they’re going through it, right? As they’re like talking about it. And for me personally, it’s a better way to get information into my brain than just reading a really, really dry, dense document.

I mean, I can go for a walk in fresh air and listen to it. So we’re thinking of all kinds of really interesting use cases. We have so much educational material that you could imagine putting out in this new format for people who want to learn a little bit more about alternative dispute resolution. And we’re working on some really cool things with it. It’s a pretty stunning technology. I’m really enjoying it. Are you using it for work things or personal things?

Jen Leonard: Both. And I’ve had friends in corporate counsel’s offices and in law firms tell me that they take publicly available documents—like agency proceedings and minutes from congressional testimony and things that they would have to maybe comb through when they get home from work—and create a podcast that they listen to in the car on the way home. So I think to your point, one thing I’ve been thinking about a lot lately—not only with Notebook LM but just generally—is what does the future of all of the ways that we communicate look like?

I was writing a recommendation letter for a student recently—an alum—and I just thought, will these tools that we use even to apply for clerkships and bar admission and jobs continue to have the value that they have? What value does a cover letter have in a world where any of us can snap our fingers and generate a cover letter that’s compelling and interesting? Are we going to want to continue reading that content, or will we consume it in a different way—like this?

So one personal experience I had with Notebook LM was—I tried to look for places where I really had no idea where to get started and play with the AI. And my son just started fourth grade, and he’s studying Mandarin for the first time. He’s trying to get up to speed because he’s new and his classmates have been studying for about four years. So his teacher sent home a whole bunch of YouTube videos that he could watch.

So we took a lot of the vocabulary lists that he had—beginner vocabulary in Mandarin. So these are things like colors, names of family members, animals—things you would learn at the beginning of learning a language. And we copied and pasted the Mandarin words and their English translation into Notebook LM. And I clicked “Create a podcast.”

And I was just really curious because I knew it had language capabilities. I’ve also heard people say that Google Translate doesn’t always translate accurately. And because I don’t speak Mandarin, I couldn’t say for sure whether it would translate accurately. But I was curious to see what would come out.

And the podcast—like you said—was two people talking, having a really dynamic conversation infused with emotion about elementary-level Mandarin language. And I never told the LM that I was working with a new learner, but I suspect that it understood from the basic level of vocabulary I gave it that this was for a new learner.

And it was an English-based podcast, and they did offer a couple of pronunciations of a few of the words on the list. But more interestingly to me, it didn’t try to provide translation services. It talked about what it’s like to be at the beginning of learning a language, and some of the things beyond just memorization of vocabulary that new language learners should appreciate—like the beauty and the design and the nuance of different languages as compared with other languages.

And that sort of blew my mind. That it took that next leap of—these are really basic words, so maybe this is somebody just starting the language, and maybe we can get them excited about it. Which both my son and I were excited about, for different reasons. I just found the technological experience to be interesting. And he found it to be a really accessible way to get excited about learning a new language. So I thought that was really interesting—and something I hadn’t really considered before.

Bridget McCormack: That’s amazing. I was talking about it recently with two friends who are judges, and they’re preparing for oral arguments and you have so much reading. And it’s all publicly available, right? It’s all briefing that’s available on the website. There’s no confidential information.

And they’re in their cars a lot. And I suggested it as—why don’t you load all the briefs in a particular case into Notebook LM and listen to other people talking about the case? Obviously, it’s not a substitute. They have to read the briefs, and they will read the briefs. There’s no shortcut. But it might be a different way to engage with the issues in advance of preparing to talk to the lawyers about the case.

So it’s fascinating. It’s really going to be another game-changer. What do you think the other implications are for legal—other than another way for lawyers to process complicated information? Do you see other implications for the legal profession with this technology?

Jen Leonard: Yeah. It’s funny—I go to a monthly dinner of lawyers. Women lawyers. We get together once a month. And I was telling them about—because I’m annoying and I come to every dinner and talk about AI—it’s like the informal AI roundtable. But I was telling them about Notebook LM, and my one friend immediately said, “Oh my gosh, if I had had that in Civil Procedure and could have uploaded, like, Erie or International Shoe, and had a podcast explain to me what those cases were about…”

And then I was thinking, you know, uploading one case and having it talk to you about the case would be interesting. But if you were able to upload a whole unit of Civ Pro or torts or contracts and have the podcast sort of compare and contrast the different approaches that courts were taking in different factual circumstances—I think that would be really interesting.

And I think people learn in all different ways. I didn’t particularly relish going through and briefing case after case after case. This creates a little bit more of a dynamic learning environment. Other use cases that you see in legal?

Bridget McCormack: It’s going to be a game-changer for complicated records. If you’re a lawyer or an arbitrator or a mediator or a judge, and you want to get everybody on the same page about a complicated record... I mean, I’ve only just started playing around with the back-and-forth Q&A you can do with Notebook LM. It’s such a good research assistant. It will put together a timeline and focus on, you know, specific places where there’s a lack of clarity.

And maybe a better tool for like a team of people that are working on the same complicated transaction or litigation—it’s just like adding a great additional super smart, really nice paralegal. It just could be another member of the team in a way that I think is going to be really effective for lawyers who use it.

Jen Leonard: Also thinking—this is the first time I’ve seen this create-a-podcast feature, which always means that it’s the worst version of whatever we’re going to see. And right now, the way that you engage with it is you just click a button—you don’t give it any direction, like “We want you to explore this angle” or “Answer these questions.”

But I could see in the years ahead—maybe the months ahead at this rate—being able to direct the podcast hosts to focus on different elements of a case or set of cases.

I also think, you know, for the general public, there are huge opportunities to sort of leapfrog and make huge advances in natural language processing of documents that they get from a court—of trying to read judicial opinions or court orders and try and figure out exactly what’s actually happening with their case.

If you have a Notebook LM file and you can upload everything that you’ve had in a court case that you’re involved with and then ask it questions of the documents... I see that as being like a really exciting thing.

And you think about ten years ago—trailblazers who were working in natural language processing and how manual the process was of trying to figure out exactly how people speak and how to direct them based on different words. I think we’ll look back and it will seem so laborious as compared with what we’re about to be able to do with it.

Bridget McCormack: Yeah, it’s gonna be fun to watch.

Main Topic: How GPT-4o’s Canvas Transforms Legal Collaboration

Jen Leonard: So that is Google’s Notebook LM. And now we’re going to talk about our second really cool new technology that we’ve been playing around with. And I’m going to turn it to you, Bridget, to tell us a little bit about GPT-4o with Canvas—a name that really rolls off the tongue, as all of OpenAI’s products do. What is this new Canvas upgrade?

Bridget McCormack: Yeah, as you know from my AI Aha! Moment, I’m very taken with this new offering from OpenAI. And it came with no fanfare. I don’t even know how I first learned that it was available—if you’re in your ChatGPT account, you can choose between the available models, assuming you’re paying for GPT-4o. And so it just kind of showed up, I don’t know, a couple of weeks ago as an option. I’ve heard some people say that it was an answer to Anthropic’s Claude feature, “Artifacts.”

But what it is is basically an editable workspace that goes beyond the normal interaction you have with GPT-4o when you’re on the site. And it gives you sort of a new window where you’re co-creating the… the Word document, the speech, the whatever it is that you’ve asked ChatGPT to work on with you.

So you still have your Q&A panel kind of on the left side of your screen. Then the document it is creating for you and editing with you in the middle of the screen. And then it has these always-available editing functions on the right.

So, for example, you can adjust the reading level of the document—from middle school to high school to graduate school—so you sort of pitch it to the audience that you’re going to be presenting it to. It has an option of specific edits throughout the document. And it will literally highlight a particular sentence and say, “This is a broad claim, and it might be better if you could back it up a little bit with some examples of what it means.”

When I first saw that, I was like, “Well wait a minute. You put the broad claim in there. Why don’t you just do better the first time?” But what it’s doing is—it actually just wants to make sure you’re making the choice, right? So it gives you these like choices throughout the document. If you like its suggestion, you give it a thumbs up and it makes the change, right there in real time.

So you’re just literally kind of editing the document with this really helpful writer on your right. And then you can also, in your query, give it other instructions and questions. Like I said, “Please never, ever, ever use the word ‘delve’ in anything we’re working on together. Don’t use the word ‘delve.’” And same with the word “grapple.” Like, I have words that I feel very strongly about it never using. It was like, “Yes, no problem,” and it like took ‘grapple’ out.

And then, as I said in my AI Aha! Moment, at the very end of this one piece I was working on, I said, “Now will you put it in the voice of Bridget McCormack?” And then it—stunningly—translated it yet again. It has an option for putting final polish on it. In all the ways you normally work with ChatGPT, you can make it punchier or less punchy. You can make it more formal, less formal. But the specific inline feedback is what’s pretty incredible.

It also has version control functions, so you can track your changes and you can revert to previous versions of all or part of it, so that you don’t lose track of something that maybe ends up not where you wanted it to end up in the editing process. You’re like, “Actually, I think I liked it at the beginning,” which happens to me all the time in real life. So it’s really nice to have somebody else helping me remember where it was that I really liked it.

And then it’s also been trained to understand context a lot better—the 4o model has been. So it really makes excellent suggestions. They’re not just random suggestions—they’re pretty responsive to your input. It’s like a true partner in coming up with a document together. I didn’t know that I needed a new way of doing word processing, but I now know I needed a new way of doing word processing. And it’s so much better. Have you been using it much? Or a little bit?

Jen Leonard: I have. And it’s funny—you said you didn’t know that you needed a new way to word process. And I didn’t know how spoiled I was getting around AI, which has been, you know, so awe-inspiring. And then you quickly get over it, and you start pointing out the flaws with it—the things you want to make it better.

And one of the things that now frustrates me is the exact way that I’ve been engaging with AI all along, which is: give it a prompt and it generates text, and then you want it to do something different—but you have to regenerate the entire answer. At least that’s how ChatGPT worked before.

And what was amazing to me was—I can’t even remember what I asked it to generate in Canvas—but say I asked it, you know, “Draft me an intro summary to precede a proposal for X services.” And maybe it would draft five paragraphs. And in the old ChatGPT, if I didn’t like paragraph two, and I made the leap to edit the whole thing, it would change everything—even if I liked one, three, four, and five.

But now what’s cool is, like you said, you can hover over that second paragraph—and I imagine the way that they designed this on their end was: you’re now confining the regeneration of text just to that second paragraph. So everything else is going to stay the same, and then you’re going to have the ability to have more control and just zoom in on this one paragraph and make whatever adjustments you want to make.

And one—it’s just a little bit less disconcerting. You don’t start regenerating the whole thing again, which you don’t always want to do.

But like I said, I think for lawyers, it’s this element of control. Like, “I just want to zoom in right here and change this one little thing.” And then I want to go up here and tweak this one little thing. And so even though it’s still a probabilistic generated output—which, as we’ve talked about before, is more of a guess at the next word in a sequence—it feels a little bit closer to deterministic output because you have that control element.

So like you said, it starts to feel like a true writing partner. And you feel like the driver a little bit more. Is that sort of consistent with your experience?

Bridget McCormack: Yeah, absolutely. I mean, I think if lawyers who have not yet taken the plunge use this particular product first, the user interface is so compelling that it’s going to be a real gateway drug to more AI use by lawyers. It really is. It’s just got such a great user interface. I think it’s going to accelerate uptake.

I’m also excited about the ways that it might help legal aid organizations or community legal services organizations that have far more requests for help than they have time or hours or humans to provide that help.

I’ve said this about AI generally, but this is just—in some ways—a more targeted and acute version of it. Because it gives such nice control over outputs that I think strapped legal services agencies and community legal services organizations will have a really terrific new colleague to help get more done in less time. I mean, there’s just no way around it. You can just do a lot more than you could before it was released. It’s that simple.

Jen Leonard: And this is probably putting a very fine point on what you’re already saying—but even in the law firm pro bono context—you and I are big advocates for systemic overhaul of the way that we serve the public through legal services. But it feels like there are even more possibilities for a belt-and-suspenders approach where you can have that systemic overhaul, but also—because the business model for pro bono services in private sector legal work is so different, and you want to be as efficient as possible even if you’re always working on billable hour.

These types of tools start to create an image of the future where you can truly, and I know we keep saying 10X your productivity, but imagine how much more quickly you can do quote-unquote traditional legal work by partnering with these tools in a way that feels comfortable and ethical and produces high-quality work product, and how many more people you can serve as a firm or corporate legal department as a result.

Bridget McCormack: Or even better—it’s a really interesting opportunity for a forward-thinking law firm to partner with a community organization and bring this tech to bear on, you know, together handling a much bigger problem than either could probably tackle on its own.

So I think there are some really exciting opportunities for law firm pro bono departments that are in this game. It’s a game-changer.

Jen Leonard: Bridget, on the judicial side, it seems to me like we could really accelerate—both at the trial level and the appellate level—the generation of decisions. Did you have that impression, given all of your work as a Supreme Court justice?

Bridget McCormack: Yeah, I really think so. I mean, as I think I’ve told you before, I was already pretty convinced that the tech was there to do a pretty decent job with at least draft opinions. We’ve talked about Adam Unikowsky’s work before, and what he’s shown, just by using Claude, is that these models do a pretty great first draft, especially on legal questions that are maybe not as complicated, might be governed by pretty clear precedent, and don’t involve too many issues in the case.

They do a pretty great first draft already. But when you have this set of tools, it feels to me like it’s not only doing a great first draft—it’s maybe a great first, second, and final draft.

It really could—which is frankly an enormous issue for busy state courts. You know—not federal courts. I’ve said it many times: they don’t really have that many cases. I mean, most cases happen in state courts. State appellate courts, intermediate appellate courts just have an unbelievable workload. And they have to get through it.

And so another way to help them do that more efficiently—but also effectively—is going to be really attractive before too long, I suspect.

Jen Leonard: Yeah. And another sort of element of the civil justice crisis we’ve talked about in prior podcast conversations is just our fear about the public’s confidence in court systems to resolve their disputes fairly and efficiently.

And it’s exciting to me—the democratization of these tools. Obviously, corporate legal and law firms will have the opportunity to take advantage of advanced models in their own context. But what gives me hope is maybe the court systems won’t be left behind this time—because we can actually figure out how to leverage the tools that are already publicly available to make people feel better about the outcomes in their cases.

Bridget McCormack: Yeah. Again—another really exciting collaborative opportunity for a law firm and a court to partner. Reach out to me if you’re interested, guys. I got ideas. Lots of ideas.

Jen Leonard: Do it. Do it. Reach out to Bridget and help us save the world and transform all civil justice—because the tools are here. Now it’s about the humans collaborating and coming up with innovative ways to make them work for good.

So anything else you wanted to say, Bridget, before we wrap up this episode?

Bridget McCormack: No, not really. I’m pretty excited about these two new offerings. They make me think, like you said—imagine what’s coming next. It’s very exciting times.

Jen Leonard: Very exciting. And we will be back on our next episode to share whatever AI Aha! Moments unfold in our lives over the next couple of weeks.

In the meantime, we encourage you (we have no affiliation with any of these companies!) to experiment with Notebook LM and with ChatGPT 4.0 with Canvas, and to start experimenting with things in your life that are difficult to solve now but may be very, very seamless and easy to solve in the future.