AI in Legal Research: The Battle Over Copyright and Innovation

Summary

In this episode of 2030 Vision: AI and the Future of Law, Bridget McCormack and Jen Leonard break down two major AI stories shaking up the legal industry: Elon Musk’s unsolicited $97.4 billion bid to buy OpenAI and the landmark Thomson Reuters v. Ross Intelligence case, which sets a critical precedent for AI training data and copyright law.

They also explore OpenAI’s new Deep Research tool, which is revolutionizing how legal professionals conduct research. As AI-powered tools become increasingly sophisticated, law firms face urgent questions: How will AI impact legal research? Will legal paywalls survive? And is the legal industry moving too slowly to adapt?

From the growing market for AI training data licensing to AI’s potential role in democratizing legal services, this episode examines how the future of law is being rewritten in real time.

Key Discussion Points

  • AI Aha! Moment: Why even the free version of AI tools is disrupting legal research.
  • Elon Musk vs. OpenAI: A $97B bid, a legal battle, and what’s really at stake.
  • Thomson Reuters v. Ross Intelligence: The case that could shape AI copyright law for decades.
  • AI Training Data & Copyright: Can AI companies train on proprietary legal headnotes?
  • Legal Paywalls & AI Research: Will legal information stay locked, or is a shift coming?
  • Deep Research & the Future of Legal Tech: How OpenAI’s new tool is changing research and strategy.
  • The Divide in Legal AI Adoption: Why some law firms are embracing AI while others hesitate.
  • The Bigger Picture: What this means for legal education, innovation, and access to justice.

Transcript

Jen Leonard: Hi, everyone, and welcome to the newest episode of 2030 Vision: AI and the Future of Law. I am your co-host, Jen Leonard, founder of Creative Lawyers, joined as always—luckily—by the brilliant Bridget McCormack, president and CEO of the American Arbitration Association. Good morning, Bridget.

Bridget McCormack: Good morning. You look to be in a hotel room by the curtains behind your head.

Jen Leonard: Yes, this is not my actual bedroom. I am in Louisville, Kentucky today doing a presentation about generative AI, our favorite topic. And it is earlier than normal, so I am caffeinating and might be a little bit quieter than normal—or chattier, not sure. But you look like you're at home.

Bridget McCormack: I am. I did a presentation yesterday about generative AI, and it didn't start until 4:30 or 5 or something. And so I was on the last flight out. I Uber’d to the wrong airport. I literally thought I was flying out of Newark because I had flown into Newark, but I was flying out of LaGuardia. Like, I'm on planes so much that I... And luckily, we had just gotten through the tunnel and the driver said, "Terminal?" And I said, "It's Terminal C." And then I thought, Newark? I've never had Terminal C for Delta at Newark. And finally I realized, oh, God, I'm supposed to be at LaGuardia. I was like, I'm so sorry, sir.

And we turned around and went back through the tunnel and back to the other side of Manhattan. I just barely made it. So I'm very happy to be home, but I got home late and it's early. So we're going to do our best today. But there's lots to talk about as always.

Jen Leonard: It is, and I will just say we have so many things in common, including that I have done that before. I did this last summer. I was in Florida and I just assumed I was flying home out of the Miami International Airport, so that's where I went. I insisted to the person at the American Airlines counter that I was in the right place, and she very rightly insisted I should have been in West Palm Beach. So then I took the scariest, fastest Uber ride of my life from Miami to West Palm Beach. So I have done that. It is a terrifying feeling.

Well, I'm glad you are home in Michigan and it's lovely to see you. Today we are going to talk about Deep Research, something that you and I are using through OpenAI—their new Deep Research product. But before we dive into that, we will do our regular segments: our AI Aha! segment (thanks to Megan McMillan at Cleary for naming it) and our new What Just Happened? segment, catching people up on a couple of big pieces of AI news. And then we'll dive into our main topic for today on Deep Research.

AI Aha! Moments: The AI Divide — Are Some Lawyers Getting Left Behind?

Jen Leonard: So what is your AI aha moment for this episode, Bridget?

Bridget McCormack: I had a hard time choosing one this week, but I'm going to tell you about a panel I was on last Friday at a construction conference—a conference of very senior construction lawyers and arbitrators. The three other panelists were lawyers and arbitrators. They kind of pulled me in because the panel was about AI. And they did a pretty interesting thing. They took a mock construction dispute file that they had put together for a law school exercise and fed it into just the free version of ChatGPT, all in advance of the conference. 

Then they asked everyone coming to the conference to answer the same eight questions they had put to the free version of ChatGPT, and compared the results to see where the free version basically got it right. The exercise went right to the final use case in dispute resolution: "Decide this dispute, free ChatGPT, and let's see how you do." And it was shockingly good. Just the free version! I think their data showed it agreed with these senior experts in the field on six out of eight of the specific questions they asked it to decide. And on the other two, the questions were really nuanced and the group itself was divided. It reminded me of Adam Unikowsky's experiments with Supreme Court decisions on the close questions: it didn't come out the way the majority of participants expected, but everyone agreed the answer was debatable. When I first saw the results of the experiment, I asked the other panelists, "Shouldn't we try running this through the paid models and some of the reasoning models? We're probably going to get different results." They did end up running it through at least Claude 3 Sonnet.

But I think I was wrong, because by doing it in the free model and showing just how good it was, this entire room of senior lawyers—many of whom hadn't really been using it regularly yet—were pretty moved by the fact that just the free version could do such a good job. 

You know, I always used to tell people on the free version that you have to use the paid version. And literally since then, I've been telling people: or just use the free version! It is stunning what even the free version can do. I do think until you actually use the technology, it's hard to appreciate how impactful it's going to be and how it's a general-purpose technology that's going to change our lives. So the "Aha!" was not the technology doing something delightfully new for me, but the realization that going back to basics—even the basic model—can really open people's eyes. I don't know, maybe that's not "Aha!" enough, but it was stunning, I thought.

Jen Leonard: Well, I think it's like two levels of Aha!, right? You're identifying two different ways to use it in that context. One is for demonstrative purposes—just helping people see how powerful it can be. For that purpose, it sounds like the free version improved enough over time from user engagement, because it definitely was not there a couple years ago.

Bridget McCormack: Yeah, it must have improved significantly, because it really was impressive. The outputs were impressive. It makes sense that it's improved. I don't ever use the free model anymore, so I'm not in touch with that, but it was pretty impressive that they ran this experiment and showed the results.

Jen Leonard: The second level, I guess to your other question, is if you were actually going to deploy it in a real-world context, you could use the advanced reasoning models and probably get even better results.

Bridget McCormack: Yeah, that was kind of my value-add on the panel—explaining to everybody what the reasoning models do differently and how, if you thought this was impressive, get a subscription and run it through the reasoning models to see what happens next. It was a really interesting conversation. 

I'm seeing more and more openness from lawyers, at least in the construction bar. The construction bar, it turns out, is pretty close to the front end of a lot of technology changes, I think because there's so much technology in the underlying construction business. So the bar seems curious and interested, which is fun. How about you? Did you have an AI moment this week?

Jen Leonard: Well, first of all, we did not catch up on football news—my Birds won the Super Bowl! And I will say I used AI to develop a menu for our neighborhood brunch that we hosted. Not exactly an AI Aha!, but it was an Aha! in the sense that I decided to host this brunch for 25 people and all their kids, like, the night before—and then used ChatGPT to come up with recipes for French toast sticks and egg casseroles and all of these things that were really easy to make. We've talked about that before, but that was delightful. I just said that to say: Go Birds!

But my AI Aha! was more of a realization. I feel like I have crossed a threshold with my engagement with AI—I increasingly feel like I live in a different world than people who are not using this technology regularly. It's a weird feeling because the faster the technology moves and the more I use it, the more I feel like I literally live in the future. I talk to lots of people in my life who are very thoughtful, including about these topics.

When I probe whether they're actually using the technology or how they're using it, they're not really using it at all. Some of them are interested in it. But when I ask, you know, are you using o1 or o3? Have you tried the new reasoning models? They're not even off of the free version of ChatGPT. And I had a conversation a few weeks ago with someone at lunch in a totally different industry, and their industry is at, like, square one—"We just want some information about this."

And I don't know, I don't do anything now without thinking first: How can AI help me do this smarter, better, faster? So I don't know if you've had this experience. It just feels like everybody else is 30 years behind now.

Bridget McCormack: This liminal period feels weird. If we were having this conversation two years from now, I bet once it's in all of our iPhones and in all of our appliances—and again, the robots are bringing me my coffee or Diet Coke or wine—we'll be through it. But right now we're in the middle of it and it is odd. I feel the same way. Yesterday, our chief people officer at the AAA was describing to our board our promotion rate. One of the board members asked a question I had never thought to ask, which was: what's a healthy promotion rate? Where do you want it to be?

You know, he had great answers because this is what he does for a living, but I didn't. So I ran immediately to my friend ChatGPT and got such a thoughtful answer. In a million years, I wouldn't have gone to Google and wanted to sift through a bunch of links to find what different people say about the answer. I wanted it synthesized and organized.

Jen Leonard: We could talk in future episodes about how this will unfold in education too, and how students increasingly—I just don't think they will come to classrooms without having engaged with AI in some way. There was a study out last week: a new survey of employees across different industries found that I think 30% of people are actively using AI at work now for professional purposes.

Whether they're allowed to or not. But I will say I am hearing more conversations in our industry about "secret cyborgs" as a major problem in organizations—people finding out that others are using tech for all sorts of things. I just think that is the future, right? It's very interesting. So we could talk in future episodes about all of that.

What Just Happened: Elon Musk vs. OpenAI — A $97B Disruption or Legal Power Play?

Jen Leonard: But we'll dive into our What Just Happened? segment because AI moves so quickly and people are probably seeing headlines float across their transoms and not having time to focus on whether it matters or not.

The first is Elon Musk made a bid—an unsolicited bid—that maybe you can tell us about, to buy OpenAI.

Bridget McCormack: Yeah, so Elon Musk and a group of other investors made an offer, again unsolicited (as you said), for $97.4 billion. And as you do when you're offering to buy a business for $97.4 billion, I think he made the offer on X (Twitter). Sam Altman right away tweeted back at him and basically said, "No thanks," or like, not interested—but "I'll buy Twitter for $9.74 billion." And Elon Musk tweeted right back at him with one word. I think he said, "Swindler." It was another little public back-and-forth between Elon Musk and Sam Altman. Then the OpenAI board officially turned down the bid to purchase OpenAI for $97.4 billion.

Correct me if I get this wrong, but OpenAI was founded as a nonprofit, and its mission is to bring artificial general intelligence to the world. If that's your mission, you succeed whether you (OpenAI) are the ones that bring it to the world or your work seeds a thousand other startups and one of them brings it to the world, right? That's the thing about a nonprofit, mission-driven organization: it doesn't have to be us.

Then in 2018 or 2019, they set up, within the nonprofit governance structure, a public benefit company—a for-profit—so they could raise funds, which they obviously have been doing with stunning success since then (they've raised billions of dollars). And now they're trying to change their nonprofit business (the parent of the for-profit) into a fully for-profit company. There's a process for doing that, and you need regulatory sign-off. Some commentators viewed Musk's unsolicited bid as an attempt to complicate that. It is also the case that Musk has sued OpenAI to stop it from converting to a for-profit business. So commentators view this bid as just another arrow in the quiver—another way to slow OpenAI down and stop their ability to convert to a for-profit.

A nonprofit has to act in the interests of the public it serves. And if you get an offer for $97.4 billion, I think it's fair to say you're supposed to actually think that through and consider whether you have to accept that offer, because it might be in the interest of the public mission you serve—versus your alternative path of changing into a for-profit company. So, I don't know. 

At the same time this week, Elon Musk's AI company, xAI, released its latest model, Grok 3, which is a reasoning model and a multimodal model. I haven't used it myself, but by all accounts it's pretty sophisticated. One of Grok 3's pitches is that it can revolutionize law and justice. It says, send your disputes to Grok 3 and we can resolve them quickly, efficiently, and inexpensively. It's a funny combination of "What Just Happened" moments: Elon Musk's own company seems to be targeting legal, and at the same time Musk has a pretty sophisticated legal strategy to slow OpenAI down. I think this bid was the latest piece of that strategy. I don't know—did you pick up anything else in that back-and-forth that's relevant?

Jen Leonard: No—I mean, what a lot of lawyers may not realize is that Elon Musk was a co-founder, with Sam Altman, of OpenAI. And at one point, before he parted ways with the company, he wanted to convert OpenAI into a for-profit company and tie it to Tesla. That was a point of tension between Elon and Sam at the time. So it's all just part of this messy soap opera that is OpenAI and Sam Altman and Elon Musk. I think this will not be the last time that Elon tries to slow down his competition by using the law to complicate things. But as you said, they also created this complicated legal situation for themselves in the first place by creating a very complex nonprofit-overseeing-a-for-profit structure.

What Just Happened: Thomson Reuters v. Ross Intelligence — AI Copyright Wars Begin

Bridget McCormack: Yeah, the other thing that happened this week—and maybe you can tell us about this—is what most people are saying is a pretty significant first decision on AI training data and copyright issues. That was the Thomson Reuters v. Ross Intelligence decision. You and I have talked about it briefly, and I know you've actually read the opinion by Third Circuit Judge Bibas, who was sitting as a district court judge (we both know Judge Bibas, which made it fun to follow this opinion). You want to give us a high-level overview? Again, this is not a podcast you come to for doctrine—we're talking about change management for lawyers—but this one is relevant to the whole story for lawyers. So tell us a little bit about Thomson Reuters v. Ross.

Jen Leonard: Yeah, so this was a case originally filed in 2020. And as you mentioned, Judge Bibas is sitting by designation in the District of Delaware. When I saw the headlines on Law.com feeds, I didn't even realize at first that this was at the trial court level, because Judge Bibas normally sits on the Third Circuit. The ruling was significant because he ruled in favor of Thomson Reuters on a summary judgment motion—one he actually invited the parties to file and brief, because he had earlier ruled differently.

We both loved the opening of the opinion. Judge Bibas says, quote: "A smart man knows when he is right. A wise man knows when he is wrong. Wisdom does not always find me, so I try to embrace it when it does, even if it comes late, as it did here." And you had remarked, Bridget, as a former judge yourself, how rare that is.

Bridget McCormack: It's really hard for judges to admit when they've gotten something wrong, but I always feel strongly that it actually grows public confidence in the judiciary when judges can say, like, I'm human, I got this wrong and I'm going to do better. I loved this opening. I loved that he was right up front with, "I think I got this wrong before, and so I'm trying again." I mean, humility is a pretty important quality in a judge, and this was a great example.

Jen Leonard: Definitely. And if you are interested in copyright issues and want to do a deeper dive, the opinion is worth reading. It's stunning how much diligence Judge Bibas—and I'm sure his clerks—put into reviewing thousands and thousands of Westlaw headnotes and comparing them with the bulk memos Ross had used when building the product at the heart of this case.

The upshot is that Judge Bibas really got to the heart of fair use in this case (a defense that Ross had raised). There's another great line from the opinion: "None of Ross's possible defenses hold water. I reject them all." This included knocking out Ross's fair use claim, as well as some other defenses we won't get into.

The important part was when he walked through the four-factor test for fair use and really focused on factor one—the purpose and character of the use. He found that Ross's use was commercial (trying to sell a product to lawyers for legal research) and, importantly, non-transformative. They were taking Westlaw headnotes and using them to train an AI model (not a generative AI model, which we'll get into in a minute) to produce a product they would sell to the exact same market Westlaw serves (lawyers and judges). So factor one favored Thomson Reuters.

Then factor four, which the opinion explains is the most heavily weighted factor in fair use cases—the market effect—weighed heavily against fair use here. The product Ross was developing was intended as an actual replacement for, and competitor to, Westlaw, so it affected Westlaw's primary market (legal research services). But he also noted that there's a new emerging market for licensing data for AI training. That's a market Thomson Reuters could develop itself or license others to develop, and it came into play in the analysis as well—which is really interesting. So those are my big takeaways from the case in terms of the substantive law.

Bridget McCormack: Yeah, and just to take a step back: Ross was a startup building a legal research product. Ross is, by the way, out of business—it went out of business because of this litigation, I think (at least that's what I read). But its product was a legal research product, so it was a pretty direct competitor to Thomson Reuters. And the copyrighted piece of the Thomson Reuters product was the headnotes, right?

This is the important distinction: when a court issues a legal opinion, the opinion itself is not copyrightable. You can't copyright the law itself; Westlaw can't just say, "We grabbed it, so we own it now." The law is a public good, so you can't copyright that. But as all of us who went to law school and practiced law know, the headnotes are what Westlaw has added to the law—that is the protected product. The question here was: are the headnotes that Westlaw added to the public good of law copyrightable?

Jen Leonard: That's right. And the creative element is the numerical taxonomy that you referenced—the system lawyers use to develop connections across case law and treatises. Any lawyer who's ever lawyered knows how to use Westlaw headnotes and key numbers to find nuances and add value in that way.

Bridget McCormack: You see all this back-and-forth commentary across media after this opinion about what it means for the frontier model companies that, as we know, are being sued by many businesses who believe their data was used to train those models, right? The New York Times, I think, is the lead plaintiff in one of the biggest cases against OpenAI in particular, for violation of copyright.

Do you think Judge Bibas's opinion predicts an outcome for all of those pending lawsuits? I know we're not going to make legal predictions about how doctrine will go exactly, but how far does this opinion go? What does it tell us about that litigation coming down the pike?

Jen Leonard: I think it will obviously be used by lawyers as persuasive authority if they'd like their cases to come out the same way—and if not, they'll distinguish the facts of their case from the facts here. But it's a district court decision; it has no binding precedential value. The real question will be: what will the circuit courts of appeals do? And if there are eventually splits across circuits, what will the Supreme Court do if it takes a case like this? That would really change the landscape nationally. Is that how you're thinking about it? I think outside of legal circles, there's been a tendency to overstate this case's importance in the broader tech landscape and its precedential value.

Bridget McCormack: Yeah. Judge Bibas was careful to note that this was a pre–generative AI case. What Ross was doing came before generative AI; it's like traditional AI. And that might make a difference. There are some other things that might make a difference as well, but frontier models that may or may not have trained on Westlaw's products might have different defenses to raise. I mean, I don't know exactly how all of those will play out, but I can imagine how they would differentiate both their process and their use case from Ross's, right? The frontier models—I have no idea if they trained on Lexis or Westlaw products (let me be very clear: no idea what they trained on at all; I don't think anybody knows right now). Maybe we'll find out in some of the pending litigation. But if they did train on Lexis or Westlaw products, I could imagine them saying, "We weren't doing that to build a competitor for Lexis or Westlaw. We're just trying to train a general-purpose technology, right? We're building the printing press, the steam engine."

We don't really care about competing with Westlaw. We don't intend to compete with Westlaw. (Although Grok 3 apparently does—because according to its announcement last week, it's ready to go and is going to be faster and cheaper than traditional legal services.) So I think it'll be interesting. We don't yet know how it's all going to play out. But it was interesting that Judge Bibas was really careful to draw that line around what he was doing and what he was not doing in this particular case.

Jen Leonard: Yeah, and there were several points where he was really clear about the nature of this case—the generative AI versus traditional AI point you made, and the direct nature of the competition between the two parties here. I think you're absolutely right that it gets a little more complicated when you're talking about a general-purpose technology that will be used in lots of different ways. It seems like even the tech companies themselves don't know exactly which markets they're trying to serve, which is their big challenge at the moment. It's also interesting in terms of what it means for startups. We talked about DeepSeek (the open-source model release) and how it caused heads to spin in Silicon Valley over whether you really need all of the resources these companies are trying to gather to be competitive.

As you mentioned, in this case Ross literally went out of business because of this lawsuit. It creates a roadmap, I think, for big incumbents—if they find sympathetic judges who agree with Judge Bibas's reasoning here—to stifle innovation. That, to me, is the inherent tension between protecting original works and promoting innovation.

Bridget McCormack: In the legal context, it'll be interesting, right? Because the underlying law is not copyrightable, but it is sometimes hard to find. If you just want to find every contract case that the Texas Supreme Court has issued, the easiest place to get it is Westlaw or Lexis, because they've already gathered it, organized it—made it possible for lawyers to sift through.

But it's not the only way, right? So the frontier model companies theoretically—maybe Grok 3 already did this—could go to every single state and ingest all of the legal doctrine that's come out of all of the courts in those states. (State court can be tricky in a non-unified court system; you might literally have to go district by district to get every trial court opinion.) But it certainly could be done if you have the resources. I think the big frontier model companies do have the resources, right? Assuming they can ingest all of the law without the wrappers that Thomson Reuters and Lexis have put on them, they could organize it in a brand new way. I mean, couldn't you imagine?

And maybe now we're starting to shift into our main topic, which is OpenAI's Deep Research product, which you and I both broke down and bought the Pro subscription for so we could start using it. We don't regret it, because it's amazing.

But can't you imagine, once they've ingested all of the law, that they could organize it in a brand new way? We lawyers have accepted the version we've been given by a couple of companies. My guess is these frontier model companies could probably produce some new ways to organize doctrine.

Jen Leonard: Absolutely. You know, that's why, when I was reading the opinion and some of the commentary around it, it already felt dated—and this sort of goes back to the AI Aha! moment from earlier—because courts move slowly and legislation moves slowly. This case was filed in 2020; it's now five years later and we're just getting a decision on a summary judgment motion. It made me think about the early days of the massive change that happened in the music industry. When Napster came out, the walls were sort of holding for a while in favor of the incumbents. Eventually, technology, innovation, market demands, and consumer preferences led to the deterioration, in a lot of ways, of the major record labels.

So to me, if I were an executive at Thomson Reuters or one of these large incumbent companies, I wouldn't take that much solace in this ruling. You know, it's maybe a good temporary win. But like you said, all of these big tech companies are sucking up as much information as they can—including the underlying information here. You and I talk about this in presentations all the time: it's some of the most trustworthy, well-edited, well-written source material on earth, and the case law itself is in the public domain—not copyrightable. Once that data is available to these companies, I think we're in a new era in terms of how we engage with legal information and how people understand the law.

Oh—before we get to our main topic, there's one super minor point that kind of relates to it.

We have both purchased the OpenAI subscription that includes Deep Research. I don't regret it for a minute, even though it's $200 a month, which seems like a wild amount to pay. It's amazing the kinds of research it can do and how helpful it is in a million different ways. But I used it to prep for our podcast today, to learn more about Judge Bibas's opinion.

And it was a minor thing that I noticed: in the generated research report I got, it continually referenced public-facing thought leadership from Davis Wright Tremaine and Skadden.

And I think, for law firms, this is the direction they need to be heading from a marketing and business development standpoint. Because if I'm a party who's potentially concerned about these issues, I'm now picking up the phone and calling Skadden or Davis Wright Tremaine to help me navigate all of this.

And Ethan Mollick talks about this—moving from an SEO world to an AI-optimization world. And it feels to me like an easy win for law firms to get more and more of their thought leadership out there so that it does get sucked into these deep research tools.

Bridget McCormack: Yeah, that is interesting that those two firms came up. I mean, I actually haven't really followed either of them in the AI space, but they obviously are thinking about it if they're finding their way into the Deep Research outputs, right?

Jen Leonard: Yeah. In the case of Davis Wright Tremaine, I've done panels with their representatives, so I know they are being really intentional about this. Skadden I don't know intimately, but I did notice that the resource the report was citing is some sort of Skadden thought leadership that is directly AI-related. So whoever is thinking about their external marketing—and how AI captures it—is being really explicit that Skadden is focused on AI.

Main Topic: Deep Research & the Future of Law: AI’s Role in Legal Innovation

Jen Leonard: All of that leads us to our main topic, which is Deep Research. We've been using it—for those who haven't, Deep Research is a new tool available through OpenAI that does exactly what the name says (which is refreshing from OpenAI, a company not historically great at naming things). Deep Research will take your prompt. You can ask, as we did for this episode, "Tell us about the opinion in Thomson Reuters v. Ross."

It will then respond to you with a series of questions—the kind you'd get from a very smart junior associate, I imagine. You know, questions like: Do you want me to look in the legal industry only, the tech industry, or both? Are you looking for thought leaders' responses or only other courts? It asks you a whole bunch of questions. You give it the answers, and then it goes off for a few minutes and does deep research and presents you with a report.

The report is not just a summary (which is more what you get from Google's Deep Research), but really goes deeper into implications and nuanced perspectives. I just find it absolutely amazing.

Bridget McCormack: Yeah, it's a stunning product. I mean, like you, I don't regret it at all. It's really pretty incredible to have that kind of help right at your fingertips. It's very smart, it's very thorough, it's very efficient. And it does feel like the next step in forcing lawyers and legal businesses to think differently about how their work is valuable—which I absolutely believe it is. I believe the human piece of what we do is critical. (I think you and I were in the same conversation with a partner at a big law firm—we don't need to be specific—who used it and produced what he said was an A-minus level brief. So if you can produce A-minus-level work in minutes rather than what might take an associate a week or two...)

We're just going to have to think differently about what practicing law means, right? Or we can get excited about what it means, because we can probably do a whole lot more—and a whole lot more of what we like doing. But it definitely feels to me like the next reminder to lawyers that you really have to get serious about what this means for what we do. I don't know... other takeaways that you have?

Jen Leonard: We talked about this. When I ran the Deep Research report on Judge Bibas's opinion, my first thought was: if I think back to my days as a junior associate in a law firm—if a partner had asked me to pull that opinion, read it, interpret it, figure out the implications for our clients, and not bill too much time for it—I remember those days. I remember being brand new to understanding anything about the law, just trying to figure out how to read things well and capture all the nuance. This produced a result that I could never have produced as a junior associate. And even if I could have, it would have taken me weeks of billable time to come up with something like this. So for the business model, I just think we are not that far away (as you said—we'll talk now, I guess, about that paywall issue). Once that unlock happens—and I think it will happen, for the reasons we talked about earlier: it's not copyrightable information, it just requires somebody to gather it—we're in a totally different world.

Bridget McCormack: Yeah, the paywall has been what everyone thought was maybe the barrier to this technology having a fast impact in legal, but I'm not confident it hasn't already been overcome. I mean, we don't know, because it's not like these private companies have to report what they've scraped or where they've gone and ingested data. And again, let's be very clear: you're not doing anything wrong by ingesting every legal opinion—as it should be, frankly, right? The law should be free and public. (It should also, frankly, be understandable to the people who are governed by it, but that might be asking a lot.) 

Maybe the frontier model companies have already ingested all of the law, and the paywall… if you experiment with Deep Research and ask it legal questions, you might come to that conclusion, I have to say. I don't know, but when you see the results, you might think it has. So paywalls, I think, will not serve as moats for long—in our industry or any other. Zach Abramowitz (who I do another podcast with) said, "It's pace of innovation over moats with this technology. Moats are useless. And paywalls are moats." I think he's right about that.

Jen Leonard: Or as Ethan Mollick says, speed is a moat, which is essentially the same point.

Bridget McCormack: Yeah, same idea. Ethan Mollick was the one who first started saying, like, assume paywalls will not be an issue in five minutes. And it might be five minutes ago at this point—I don't know where we are on whether paywalls are doing anything anymore. What's your thought about that?

Jen Leonard: Maybe this comparison is helpful: when I was running Deep Research for this podcast, I went back and forth with the model and asked it to pull the actual opinion—which, again, is problematic for Google, because before, I would have gone to Google to find it. And it immediately produced the link to the District of Delaware's website, where I could just click on a PDF, which I then uploaded to Google's NotebookLM and turned into a podcast to listen to.

But I think, to your point, the challenge has been scraping state court data—getting all of this disconnected information in one place. Because it's out there publicly and it's not copyrightable, the model will go get it for me if I ask—which means it has access to anywhere on the web where this information is publicly available. By comparison, I'm working on a different project where we were trying to get articles from Inside Higher Ed (the paywalled publication for higher-ed administration). I ran a Deep Research report on what Inside Higher Ed wrote last year about generative AI in higher education, and it produced a summary of 20 different articles. Then I asked it, "Isn't this a paywalled resource? How are you able to give me summaries of these? Because I don't have a subscription to it, and I don't know that you do either."

And it said, basically, "I cannot access the underlying information beyond the five free articles you get. So I'm basing the summaries on other public commentary about the underlying articles." So I think the point you're making is the really important one for lawyers: our information really isn't behind a paywall. The way I had been thinking about a paywall was the one we use to access Westlaw and Lexis, but the Bibas opinion and, as far as we know, the underlying case law are not paywalled at all—and not copyrightable at all. So I think it's a bigger opportunity and challenge in legal for that reason.

Bridget McCormack: Yeah. And the product obviously wasn't announced as, you know, "We're releasing Deep Research, which is basically an associate at a law firm," because in fact it's also a consultant at a consulting firm and a junior partner in whatever other field. But it really is—legal work is what I've spent my life doing, and the things it does, like you said, are so similar to what you might ask an associate to do: researching, making connections across different sources of law, and synthesizing where there's conflict or not. It feels like it's doing the kind of work that you need lawyers to do, right?

So I don't know... It reminds me a little bit of that JAMA study we talked about—I don't know, it feels like 18 years ago, but it was probably in December. It's interesting that medical researchers are willing to disrupt themselves this way. They do a study and show that the frontier models can do better diagnostic work than human doctors—and in fact better diagnostic work than human doctors using the generative models (because the human doctors tend to override the generative model's suggestions). It feels like that kind of moment for lawyers.

If we can see now that this is going to do a lot of the things we thought we were meant to be doing, let's consider the upside potential of having this fully available partner to do some of the tedious things that have to be done—so we can exercise the judgment and help people solve problems in ways I still think humans can do better than technology. But I don't know. It does feel very similar to that JAMA moment. Like, uh-oh...we're gonna have to focus on the things that we really do well.

Jen Leonard: Yeah, and I'll just close with my final thought. We do these presentations (together and separately) to law firm leaders about what makes this technology so different. So many lawyers compare it to when we went from book research to online research, or from handwritten memos to emails. I think that's a helpful framing to remember that we can adapt and change. But to your point, it's not helpful in understanding the nature of what this technology is and what it means for lawyers—because it's not just digitizing or automating things we used to do in an analog world. It is actually doing the things that we currently bring value to. So that's my final thought. Anything to wrap us up today, Bridget?

Bridget McCormack: It still makes me optimistic because, again—I don't know, maybe some people loved that kind of work, that digging through and making sense of different legal sources and organizing it. But I love the idea of having a super-available, never tired, never hungry partner to do that part and then being able to think strategically and creatively with a client about the choices the client could make. So I can still end with optimism, because that's how I feel. 

I think it not only will allow us to do the things we're uniquely poised to do, but also do more of them, right? We can get more legal services to more people, which again is a huge upside. Ending on my positive note.

Jen Leonard: Same. It made me feel positive as well. I'm also a little less worried about legal education after having this opportunity to revisit a legal issue using Deep Research. I've been really, really troubled about what this means for learning, but I think there are some really interesting ways we'll be able to use these tools and have students learn through the opinions. Reading the actual opinion was still very different and important compared to just reading the Deep Research report. So maybe I feel less alarmed than I did—but also more motivated to help people who want to think differently about how we transition into that future.

Thanks to everybody out there for listening. We will see you next time on 2030 Vision, AI and the Future of Law.