Making Talk Cheap: Are AI Tools Devaluing Legal Writing?

Generative AI is rapidly changing how legal work gets done, and writing is at the center of that shift. In this episode of AI and the Future of Law, Jen Leonard and Bridget McCormack examine how AI tools that produce polished legal text are reshaping writing as a professional skill. Drawing on the paper “Making Talk Cheap: Generative AI and Labor Market Signaling” and a new Wharton study on enterprise AI adoption and ROI, they explore what happens when strong writing no longer functions as a reliable signal of competence, effort, or merit.
The discussion looks ahead to how law firms, legal departments, and dispute resolution institutions may need to rethink hiring, evaluation, and training. As AI becomes embedded in everyday workflows, human judgment, ethics, and client-focused problem-solving are poised to become even more central to effective legal practice.

Key Takeaways

  • Writing Becomes a Commodity: Generative AI tools can now produce clear, well-structured legal text, reducing the value of writing as a signal of skill, effort, or merit.
  • Legal Hiring May Need to Evolve: If most candidates submit AI-assisted, polished work, law firms, courts, and legal departments may need new approaches to evaluating legal talent.
  • Human Judgment Gains Importance: As AI handles more drafting, distinctly human capabilities—judgment, ethics, strategic thinking, and client management—become more central to effective legal practice.
  • Clients Are Moving Quickly on AI: Enterprise AI adoption and ROI are rising, which means legal teams must understand these tools and their implications for risk, compliance, and dispute resolution.

Final Thoughts

AI is reshaping legal writing from a differentiator into a baseline capability. That shift challenges long-standing assumptions about how the profession identifies and advances talent.
For organizations and individuals alike, the priority now is clear: integrate AI thoughtfully while investing in the human skills and professional judgment that will continue to anchor trust in legal services.

Transcript

Jen Leonard: Hi everyone, and welcome back to AI and the Future of Law. I'm your co-host, Jen Leonard, founder of Creative Lawyers, here as always with the wonderful Bridget McCormack, president and CEO of the American Arbitration Association. Hi, Bridget.

Bridget McCormack: Hi, Jen, great to see you.

Jen Leonard: Wonderful to see you, as always. We are here on every episode to break down what is happening in the world of artificial intelligence and what it means for the legal profession. We have three segments on our podcast: our AI Aha!’s – the things that we are using AI for in our regular day-to-day lives that we find particularly interesting; our What Just Happened segment – what just happened in the broader tech landscape that we think our legal audience might want to know about and what it might mean for law; and then our main topic, something that is relevant for legal audiences in particular.

So without further ado, let's kick it off with our AI Aha!’s. Bridget, I haven't seen you in a while. You've been out and about traveling the globe. What have you been using AI for since last we spoke?

AI Aha! Moments

Bridget McCormack: Well, I was using it for a whole lot of things, but the one I thought I would talk about today is a specific diet and exercise program – having it evaluate conflicting advice from experts I listen to – along with some habit tracking and reminders for me.

So, it turns out I have real bone density issues. Maybe I'm just of the age where my bone density is an issue. So my physician really wants me to be building muscle, which also allows you to build up your bone density. My mother had significant bone density issues and lots of falls. And so I'm really trying to avoid that issue in my next few decades.

And there's all this conflicting information out there about the amount of protein you should be eating when you're trying to build muscle and build bone density. And it's from different really credible sources. For me, it's confusing, because it actually is hard to do if you believe one version of advice and less hard if you believe the other – but I don't want to believe the other unless it's right.

And then figuring out the best kind of weightlifting and other exercises to do specifically when bone density is your focus. So my focus is not really on losing weight, and I'm not trying to win any race or anything. I literally want to figure out the fitness program and the diet – not diet meaning eat less, but diet to address what could be a significant health issue for me in the next couple of decades.

And it's been kind of fun because, like every other chat stream, we're having a lot of back and forth, my chat friend and I. And I will, as always, upload articles that I find, especially ones that make it a little more difficult for me to think through how it all works.

Also, I don't know if you've done this yet, but I've been uploading YouTube videos that are on topic but that I really don't have time to watch or listen to – I just want the learnings to go into this ongoing conversation that we're having.

And I'm about six or seven weeks into trying to follow the plan, and I feel significantly stronger. Obviously I haven't had a new bone density test yet, but I can't wait to see what the results are when I do.

So it's been a way not to bore my friends and family with the deep dive that I'm taking on this particular health journey. And again, it has not disappointed me. So it's been a good one.

Jen Leonard: So when you use the YouTube videos, do you just copy and paste the URL and ask it to watch them?

Bridget McCormack: I do that, or if I have the YouTube video on my phone or iPad, I just upload it directly into ChatGPT.

Jen Leonard: Okay, I'm going to have to try this. I feel like it's an emerging capability. I did try it a while ago and it wouldn't do it yet.

Bridget McCormack: You know, I think it must be, although I don't remember when I last tried. But I've been doing it recently and been really delighted because it's just a shortcut.

Obviously, if there's some video or audio that I need to listen to in full, I'm going to listen to the whole thing. But these probably don't need a full 90-minute listen about this particular topic. I just want to know what the takeaways are and integrate them into my conversation.

Jen Leonard: That's very cool. I'm excited to hear that. Are you lifting heavy weights?

Bridget McCormack: Yeah. I am lifting heavy weights – there does seem to be consensus on that (heavy for me, but still). I didn't want to do that at first – it's not that fun – but I've come around to feeling like I have to do it. And finding ways to get enough protein is really complicated, because I don't want to just eat all day.

How about you? What's your latest AI Aha!?

Jen Leonard: Ugh. So mine is more of an AI uh-oh this time around, because you know that I'm constantly trying to figure out ways to use AI.

I had these notes that I had taken by hand on like six pieces of paper, and I needed to put them into a summary. So I took pictures of them with my phone and I uploaded them to ChatGPT, and I asked, "Can you put these into the summary report I'm writing?"

And it took the writing and put it into a report. I'm reading through the report, and it all seemed really good and plausible for the topic I was writing about. But I had taken the notes, so I knew what they were about. And as I looked, I thought, "I don't think this is exactly what I wrote."

I went back and looked at the notes, and it was not at all accurate to what the notes said. I wrote to it, "These aren't actually what my notes said." It was really funny, because it was just like when I catch my kids lying to me.

You know how ChatGPT shows you when it's talking to itself behind the scenes? It was sort of like, "Oh God, she caught me." It was talking to itself and said, "Okay, I did hallucinate. These were not the right results."

And then, in the background, it was basically saying, "We didn’t have the technology to do this OCR, and here’s why we did it this way." But it’s not talking to me; it’s just showing me how it’s running. Then it’s like, "Maybe what we should do is go back and instead of trying to read through all of the pieces of paper at once, go one at a time and reread everything."

So I’m watching it go through this process, and then it comes back and now it’s talking to me and it’s like a tech bro: "Hey, you caught me. I didn’t actually tell you what was on the paper. I just took a guess because I didn’t have the technology I needed to read exactly what you put on the paper. But I have another workaround."

So it goes back through the conversation, and it did end up with the right responses.

But it was really interesting and a good reminder to know exactly what you’re asking it to do and to verify the outputs. I love AI, but if I did not know what was on those pages, all of the things it generated would have been very plausible. It was fascinating to see it get caught in a lie and then try to figure out how to clean up its own mess.

Bridget McCormack: That's super interesting. I wonder why they can't—I mean, I don't know. I wonder why they can't train it to say, "I have this limitation right now. I can't do whatever the OCR issue was that you needed it to do. Here's a workaround. Should I do that?"

I wonder why it can't give you more of a back and forth when it hits a barrier in a workflow. That feels like such an improvement that wouldn't be that hard to make, and that I would really appreciate. Like, let's brainstorm together how we're going to get it done.

Jen Leonard: Well, what's so interesting to me—and I don't know if it happens every time like this—but I use Claude much more frequently now than I use ChatGPT. I frequently find that Claude is more candid about its limitations and will say, "I don't have the ability to do this."

That was one of the first times I've used ChatGPT in a while, because I tend to use ChatGPT more for multimodal functions. Maybe because Claude more frequently tells me it can't do things, I go back to ChatGPT. But maybe it's better to lean on something that is honest and says, "I can't do this for you," and maybe I should err toward just using that instead.

Bridget McCormack: Well, yeah, that's interesting. I wonder if we'll see any improvements in GPT-6—which I understand we might see before the end of the year—to some of those barriers. I don't know. It’ll be interesting.

What Just Happened

Bridget McCormack: Well, why don't we move on to our What Just Happened? segment? As always, there are far more things that have happened since you and I had a chance to talk like this than we have time to talk about here, but we're going to focus on one in particular: a recent study from Wharton. So tell me what the study is about, and what did we learn?

Jen Leonard: Sure. So every year for the last three years—really since ChatGPT hit the scene—Wharton and the GBK Collective have put out an adoption report to tell us broadly what's been happening in large enterprises in the US. Essentially, they survey 800 senior US decision-makers across various functions at enterprises with over 1,000 employees and over $50 million in revenue. This year, the study was conducted in June and July 2025.

The big takeaway is that GenAI has now moved from an experimental phase to what Wharton is calling "accountable acceleration." So we have moved from an era of experimentation and exploration to—really—AI is now mainstream across large enterprises in the US. And 82% of these executives now report using it weekly. By comparison, when they first started doing this study, only 37% of executives in 2023 were using it weekly, which is a huge leap. But what are people actually using AI for? The top three use cases were data analysis, document summarization, and document editing and writing.

In terms of the money that organizations are spending on AI: about a third of the investment these large organizations are putting in goes to new tech. One of the interesting takeaways from the study is that another third is going to internal R&D—companies are realizing that they really want to be building their own custom solutions in-house. And the remaining third is going to in-house training on their existing systems.

These enterprises are reporting that there is major return on investment happening. Seventy-four percent of survey respondents reported that they are seeing positive returns already, and 82% report that they expect payoff within two to three years. If people listen to our podcast regularly, or they follow the AI space regularly, or they just exist in the world and see headlines, they'll recall that this summer there was a lot of buzz about an MIT study that said 95% of pilot projects are failing, and there's zero ROI on these pilot projects. 

So this study really counters that, saying that most organizations are actually seeing ROI, and that most are formally measuring it. Seventy-two percent have formal metrics to measure ROI, which the Wharton report says shows that AI adoption has reached a stage of maturity in large US and global enterprises.

Bridget McCormack: It sounded like there's some difference across industries. Not surprisingly, tech and other digital industries are seeing significantly more positive ROI than industries that rely on more physical work, like retail.

But that kind of segmentation of the industry data, I don't think we saw in the MIT report. So this study feels a little more robust than the MIT study—which isn't to say there's no conflicting information out there; I'm sure there is. But this one feels pretty robust.

Anyway, sorry to interrupt. It also had some information about the ways in which people are learning to make the most of it and also still struggling, right? There was some information about that?

Jen Leonard: Yeah. There's also a divide in how people in the organization are perceiving their own organization's progress, with senior leaders being very optimistic. Fifty-six percent of senior leaders say their organization is making very quick progress, but among middle managers, only 28% are seeing significant progress on the ground.

I think that probably reflects the fact that managers are responsible for training the people who are actually doing the really hard work of integrating this into the real systems on the ground, where it's very difficult to figure out how to integrate AI into systems that existed in a pre-AI world. They're dealing with training people day to day who have had jobs that have not required AI integration. They're dealing with all the human friction, which the study exposed as the really difficult part of this next phase of AI acceleration. So that's not surprising.

There's also a paradox that the study exposes: 89% of survey respondents say that GenAI enhances skill development, but 43% worry about skills declining because of the ability of AI to do a lot of the work that used to contribute to people’s ability to build skills.

Interestingly, the survey also exposed declining confidence that training alone is sufficient to advance in-house workforces, and an increasing reliance on hiring people who already have the skills and bringing them in to augment the existing workforce—whereas I think a lot of the conversation in 2024 was about upskilling the workforce you have.

And the last thing that I'll say—and then I'd love to hear your thoughts on the takeaways from the study—is the emergence of the chief AI officer role, increasingly as part of the C-suite. That sort of shows that AI is not going anywhere, and that organizations want someone whose role—whether exclusive or alongside other duties—is structuring, continuing, and sustaining AI as part of the organization's mandate going forward.

Bridget McCormack: Yeah, it's pretty interesting across the board to see what felt to me like more progress than I expected to see at the enterprise level. Maybe because most of my interactions are with law firms or legal departments at enterprises.

And you've heard me say many times before that in-house teams do seem to be further along than some of their outside counsel—some law firms—in terms of how they're finding value in the technology across their workflows. Some of that makes a lot of sense. They are cost centers at their enterprises, so they're always being asked to find ways to shave costs. And now we have a technology that allows you to do that.

But I was still surprised by how quickly it seems we're seeing positive results across large enterprises. It also made the point, I think, somewhere in the study that originally the smaller startups and nimbler teams were moving faster than large enterprises. And that gap seems to have closed. That also surprised me, because I always think of that taking longer than it seems to have taken according to this particular study.

But it's interesting for lawyers who have clients that are these enterprises. It's important information, I think, for legal professionals to be aware of: if the world of your potential clients or your actual clients is moving quickly in this direction, there are probably going to be new things that they're going to want help with—new legal challenges as a result of using this technology that they'll want advice and partnership around.

I say this all the time: there are going to be all kinds of new legal workflows created by this new technology. And so I don't think it's a reason to be worried about the future. I think it's a reason to plan for the future. But the future is going to need lawyers to think through all of these new workflows.

Jen Leonard: Yeah, absolutely. I was also surprised at the exponential growth in reported ROI and tracking of ROI. I've not seen as much of that in legal—and the optimism and integration of AI that these organizations report.

I think we're still in a really messy period of trying to figure out what it all means. It seems like every report that comes out has conflicting information in it. So it's hard, I think, to suss out what's happening. But it is encouraging to see structures being put up around these efforts.

I think particularly of the chief AI officer roles that are being created—at least they give some focus to the efforts that are happening. I would imagine that sharing best practices and figuring out what's working well through those roles can help organizations really navigate what's happening.

One thing—I can't remember whether it was this study or one of the other ones that we've looked at—that stood out to me was that organizations that explicitly frame innovation as a goal of AI integration are making more advancements than organizations that are only focused on efficiency and productivity gains.

When we do presentations and ask—you and I frequently will open with, "What's your greatest hope or fear about AI?"—the biggest hopes are almost always exclusively about efficiency gains. We present mostly to legal audiences, and I rarely hear "transformative innovation in the way that we do things."

You are a leader who has a team that's worked around thinking about it differently. So I wonder what your takeaways were from that.

Bridget McCormack: Yeah. I mean, in some ways it's the oldest story of every big tech integration. At first, there are just the hobbyists. There's the weird guy over in the corner who's using the technology, and you're like, "Okay, whatever." And then there are the folks who figure out how it can bring us efficiencies. That gets almost everybody on board, because everybody likes efficiency. Who doesn't like efficiency?

But I've been saying it this way recently in presentations: you kind of need that step to be able to see what the transformative use cases are. Because once you have those efficiencies, then all of a sudden you have more time to think about, "What else can we now do?" 

You've probably heard me say this—I’m sure I've done it with you in many of our presentations together. When the movie camera was first invented, all anybody used it for was to record plays. And that's great, because more people got to see the plays—people who couldn't make it to the theater. That's really cool. But then they were like, "Wait a minute. We can cut scenes and add effects."

The new art form wasn't obvious for some time. All people saw that it could do was bring efficiencies to the old art form. So I think the new art form always comes a little bit later. And I think generative AI is exactly the same. We're going to see it across not only art forms, but industries and entire ways of working. But I do think we're still in this period of finding efficiencies to give us room to see what could come next, right?

Jen Leonard: Yeah, and your analogy of the camera and filming plays made me think of lawyers refining their emails. Someday we'll think, "We had this transformative technology, and that's how we started using it."

But I do think we're crossing a Rubicon now where it will be hard to remember what it was like to sit with a blank email and write it from scratch—and that is the beginning of something more transformative.

Main Topic: LLMs and the Devaluation of Written Work

Jen Leonard: Okay, so our main topic, Bridget, is a job market paper that Ethan Mollick—one of our favorites—posted to his LinkedIn page. It's called Making Talk Cheap: LLMs and the Devaluation of Written Work, written by two professors, Anais Galdin of Dartmouth College's Tuck School of Business and Jesse Silbert of Princeton University.

I would love it if you'd be willing to describe what Galdin and Silbert argue in this paper, and why it's relevant to our listeners.

Bridget McCormack: Yeah. I think this is a really, really interesting paper. Even though it's not aimed at lawyers, I think you and I both, right away, said, "Wow, this is pretty significant and worth putting in the main section of our podcast."

Before I get into the paper, I want to shout out Ethan Mollick for pulling together all of this really interesting research to share with others. There are two others worth going off and looking at—I'm sure you saw these as well, Jen.

One he posted was about an economist whose job market paper shows how LLMs help first-gen students with all of the unwritten rules of success in education and in getting your first job—how they close a gap that previously wasn't closeable: all the usual things like internships, relationships with professors, why clubs matter, and so on. LLMs can actually make a difference for folks who just don't know anything about those unwritten rules of success. I thought that paper was fascinating—really, really interesting.

And the other one he posted that I really paused on was—I'm sorry to go totally on a tangent here—a post about how unprepared we are in academia to evaluate new science and new research if, in fact… Did you see that one?

Jen Leonard: Yes.

Bridget McCormack: And he's not wrong, right? Our methodology for evaluating and confirming new science and new research is slow and methodical, and it moves at the pace of humans.

If we start seeing new scientific breakthroughs by large language models in 2026—which is what we're told by some of the lab leaders now, that we'll see at least minor breakthroughs, and then probably more significant ones in 2027 and certainly by 2028—how do we keep up with kicking the tires on that new science and new research?

Maybe we take our human processes and build AI processes out of them, but I think we want humans to be making sure the science works, right? 

Jen Leonard: That's what's going to happen. I saw that post and first I thought it was so insightful, but also, with all the talk about whether there's an AI bubble and what's going to happen with it—and this is for a whole other podcast—I've just been so confused about the last month of product announcements and rollouts. First there's erotica one week, and then the next week we're going to have scientific breakthroughs.

It made me think of post-2008, when I would go to the King of Prussia Mall and there's a Neiman Marcus and then there's a pawn shop right next door. You're like, "What is happening with the world?"

But also, if part of the business model is selling agents that are going to come up with these breakthrough scientific innovations—and selling them for hundreds of thousands of dollars—how does that work, for the reasons Ethan's pointing out? I don't understand how that math maps.

Bridget McCormack: Yeah. I mean, maybe that's ultimately just going to continue to be a barrier for a while, or…I don't know. I think my brain can't work it through. I hope Ethan's working on it, because I read that and I was like, we want to get the benefit of the new science and the new medicine as soon as possible. My bones are only getting weaker and I'm only getting older—I want the new science.

Jen Leonard: But it does make the point of the paper again, because it's sort of like: the investments are there, the technology is there, the enthusiasm is there. The bottleneck is the humans. And the bottleneck in his post is the humans.

I can't remember which of our favorite thinkers it was who talked just last week about the AGI already being here in many respects.

Bridget McCormack: See? Yes—the bottleneck is the humans, yeah.

Jen Leonard: But until there are systems to plug it into, it's sort of irrelevant, because it's not prepared—from an infrastructure standpoint, from a social standpoint.

Bridget McCormack: Yeah, we're not prepared to process it and make it meaningful, right?

All right, back to the paper—and apologies for the off-script tangent, but another plug for just following Ethan to find the most interesting papers and ideas around this technology.

Okay, so this paper was super interesting to us because it is about the way large language models can—or do—devalue written communication, because all of a sudden they put everybody on equal footing, or they remove the barrier of effort that made writing valuable.

In most legal jobs, being a good writer is pretty important to success—not in all, but in many.

And before LLMs, writing was valuable—not only in legal but generally—because it was hard. It's hard to make it clear. It's hard to make it sing. It's hard to make it not boring, not repetitive—well-crafted with strong analysis. Good writers really stood out.

We spend a lot of time teaching law students how to write legal memos and legal briefs, and how to write them well. Law firms, especially in litigation practices, spend a lot of time working with their junior associates on improving their writing and turning them into excellent writers.

And now, all of a sudden, after this technology, anybody can produce very polished writing. So that cost barrier has collapsed, and written work becomes really, really cheap. That means it's harder to assess the quality of a candidate or an argument if everybody can produce basically the same quality of writing.

It's a super interesting paper, and it goes into some pretty specific examples of what this means.

There are ways in which writing has always been a signal for a lot of different aspects of quality, right? Writing is a way we measured someone's quality of thought. If you were a good thinker, you showed that in your good writing. Your expertise was something you showed off in your writing. Hard work—you know?

I will tell you, as somebody who employed many different clerks over 10 years on the bench, I could tell when somebody put serious effort and work into a memo I was reading. And I could tell when I saw the opposite—that didn't happen very often, but you could certainly tell.

Writing has also historically been viewed as a way of evening the playing field, right? You didn't necessarily have to know the secret handshake to get into the club if you were an excellent writer—both in your cover letter and in your writing sample.

When you went to law school, did you still always have to send a writing sample to every job? I did. That was what we sent—not just for clerkships, but every job wanted to see a writing sample. That was really the way we measured quality.

Some would argue that it was a way you could even the playing field, because somebody who didn't come from a background where they had an easy entry into a prestigious legal career could find their way in if they were an excellent writer. It was a signal of so much quality across the board.

Now, without that, the question is: how do we evaluate quality? What are the new ways we can measure quality? And what will it mean? What will it mean for the labor market?

You and I are sort of interested in what it will mean for the legal labor market, but this paper is obviously broader than that.

In terms of how lawyers advance throughout their careers, writing has also been another—at least in most careers—important gating item, right? People didn't usually advance to the top of their careers without being pretty masterful with the written word. Even if they did it less and less as leaders—if it became less a part of their day-to-day job—it was certainly a gating item in just about every legal vertical I can think of.

And now it's kind of disrupted. We're in this weird phase where everybody can produce pretty polished written product in lots of different formats. So there’s a set of interesting questions about what that means, what it will mean for the labor market. I think we should focus a little bit on the legal labor market, but also: what does it mean for legal writing that is impactful for legal decision-making?

In most courts, a lot of arguments are made in writing. Not all—sometimes they're just oral, and maybe that's a better way of evaluating quality now. But briefs in appellate courts and motions in trial courts are kind of the main currency for winning or losing in a dispute. So that maybe changes as well.

There's maybe some good news there that I could imagine, but it’ll be interesting to see how it all plays out.

So what are your thoughts on what it means for junior attorneys in particular, and how are you thinking about it?

Jen Leonard: Yeah, I mean, everything you just said made me think about all kinds of new things that I wasn't thinking about before we started. I guess I hadn't even really reflected on—and that's the whole point of the paper—what I was signaling, or what law students are signaling, when they apply for a job through their writing sample.

I guess I was under the naive impression that they were signaling that they could write well, which was important for their actual job. I wasn't viewing it as any bigger an indication than that—that they could write well for the job.

That's the point of the paper, right? That the signal you're sending—when there's asymmetric information and the employer doesn't know you—is that you put in care and time, that you understand how to compose something of quality, and that you're able to persuade, in a pool of applicants none of whom the employer knows, that you are the best of the bunch.

When that collapses and everybody has access to that same capability, you don't have that ability to persuade anymore in the context of getting the job itself. But then once on the job, do those skills even matter as much anymore either?

So there are two separate questions, I guess. One is the signaling in the candidacy itself and the effort that you put into it. The other is: are you even demonstrating skills that are increasingly relevant to the position that you're applying for?

In the study they ran, they were applying for coding jobs, where it wasn't really that relevant to be a good writer. But in our field, it is relevant to the actual role.

So once you get into the job—and like you said, there's so much effort put into teaching you how to be a strong writer as a junior associate or a law clerk, and at that level it's particularly important—what happens to those jobs where, through effective prompting and working alongside an AI, you can produce quality outputs?

Actually, part of the paper talked about how the candidates who continued to rely on their raw talent ended up worse off than the candidates who used the AI more. So do you start seeing that dynamic emerge?

Bridget McCormack: Yeah.

Jen Leonard: And the thing that I worry about—I haven't answered any of the questions we just surfaced—the thing that I worry about, which you started to touch upon, is: do we end up further away from a meritocratic system, where we just start to increasingly fall back upon proxies of quality related to the brand of the school that you graduated from, the networks that you have, the recommendations of people where there's already social proof? Because you no longer have that kind of signaling ability to rely on your own skills.

Bridget McCormack: Yeah. I mean, I'm probably showing my own personal interest in the topic by focusing so much on that. But, you know, I didn't have anyone in my family who was a lawyer when I went to law school. You didn't either, right, Jen? No lawyers in your family.

And I found the first semester of my first year super confusing. I was like, "I don't even…what is Rule 11? Rule 11 what?" I literally just had no access. I felt like there was a secret set of rules and I didn't even know where to find them.

But I did feel like I was a good writer, right? I felt like I could—and I knew how to read and I could write, so I felt like I was going to be able to continue to compete.

Then, as someone who has hired both on a law school faculty and on a court, it was writing that I looked at—for me—significantly more than all those other signals. You might have gone to a top-10 school, but if your writing was mediocre, that mattered to me a lot more than your school.

So I worry about that, because if everybody's writing now rises to exactly the same level—why wouldn't people fall back on some of those other signals that I think have the potential to preserve inequality and prevent opportunity in ways that I don't think are healthy?

I'm probably showing my own biases in focusing on that aspect of it, but that does concern me.

Jen Leonard: Well, maybe what ends up happening in some places is… I remember about 10 years ago, a couple of firms experimented with in-person interviews where incoming candidates would come in, sit in a room for a couple of hours, and write in response to prompts so the firms could assess their skills. And the candidates hated it.

So the firms ended up dropping it for that reason, because they were alienating the candidates. But maybe that actually becomes a preferred practice, because the firms are demonstrating that they care more about allowing candidates to show their skills than the proxies of prestige of the school that you go to, or other signifiers that really don't allow you to demonstrate your skills.

Bridget McCormack: But I mean, that assumes that that skill will matter. Does writing on the blank page really matter once we can write a whole lot more and a whole lot better using our super-amazing language calculator that we all now have on our computers?

So I don't know. There are other things besides writing that matter in terms of being a successful legal professional. There are interpersonal skills and creativity, right? The ability to collaborate and find new solutions. It might just be that we have to completely rethink how we hire—totally differently.

I mean, I don't know. Do you think there's an intrinsic value in writing on the blank page?

Jen Leonard: I'm the wrong person to ask, because I do not have your strength as a writer. I prefer to think out loud. So no—really, I process out loud better. I'm an externalizer. I did not thrive in legal writing.

But I do think there's value in all kinds of communication. In the law, writing tends to get most of the spotlight, and most lawyers, I think, elevate writing above all. I think there are other ways to communicate that go beyond oral argument as well.

I think the paper might touch on this—or some of the commentary around the paper—that in a world where writing has become super cheap, meeting in person, being able to build relationships, being able to navigate interpersonal dynamics become the coin of the realm.

Those are things that we need to start assessing for, because they're the differentiators. And that changes a lot of the culture in legal, I think.

Bridget McCormack: It really does. That's super complicated, because it's really hard for those assessments not to be very subjective and not to fall back on, again, proxies—whether explicit or very implicit. We don't even understand all the ways in which our brains make proxy decisions. I don't know. That worries me a little bit. But yeah.

Jen Leonard: Yeah. What do you think about academia?

Bridget McCormack: Yeah, I do think it's…I don't really understand how academia continues to value scholarship to the same degree it historically has—at least in legal academia. Again, I don't think large language models can produce a publishable paper that really adds value to a particular ongoing debate. But they certainly reduce the time from start to publication significantly.

So whatever the current number of papers or books required to get tenure is today, it doesn't seem to me like that will make sense tomorrow, or five years from now, or ten years from now, right?

I wonder if this is an area where law firms and other legal organizations that are thinking about building their pipeline for the future—not only the new people that they hire, but also how they then support and evaluate people along the way—could really differentiate themselves in the market, right?

If you're already starting to think about the change this technology brings and how you're going to meet it, you can make sure you're still attracting, recruiting, and supporting a diverse (small "d" diverse, meaning different kinds of) group of people who will make your business, your organization, or your court successful 10, 20, and 30 years from now, instead of giving up on figuring out how to evaluate talent from every corner of the market and just falling back on old habits.

Jen Leonard: No, I hope we don't either. I will be very interested to see how it plays out—I think a lot of it will depend on the economy in the next couple of years, and whether an economic downturn puts enormous pressure on firms.

Right now, I think they are finding their footing and figuring out what all of this means. I just saw reports this week that Q3 earnings for firms are up. So they're trying to figure it out in a time of relative health, even though there's uncertainty.

But if that changes, I do get nervous that a lot of the intentionality that you would hope we can bring to this will fall by the wayside. That's what worries me.

Bridget McCormack: Yep. Well, there are opportunities for legal organizations to focus on new ways of recruiting, supporting, and building their futures. I like to see everything as an opportunity more than a challenge. So there's my optimism at the end.

Jen Leonard: Absolutely. Same.

Well, great conversation—lots to dig into. I don't know that we solved anything, but we really surfaced a lot of opportunities.

Bridget McCormack: Okay. I think we're not here to solve. We're here to have conversations that others can then take and solve. So I hope we inspire others to think about cool solutions to some of these new challenges.

Jen Leonard: Same. Well, thank you so much, Bridget, as always, for spending your very valuable time with me and digging into these really cool and interesting issues.

Bridget McCormack: Great conversation, as always. Great to see you.

Jen Leonard: Great to see you too. Thanks to everybody out there for spending your time with us. We look forward to the next episode of AI and the Future of Law. Until then, be well.

November 25, 2025