How AI Is Changing Legal Education with Dyane O’Leary and Jonah Perlin

 

How should law schools teach judgment, writing, and readiness in the age of AI? Georgetown’s Jonah Perlin and Suffolk’s Dyane O’Leary join hosts Jen Leonard and Bridget McCormack to explore how generative AI is reshaping legal education—from 1L writing and grading to policy, ethics, and the “practice-ready” lawyer.

They unpack how professors are balancing AI literacy with academic integrity, reintroducing closed-book exams, and designing assignments that build both skill and skepticism. The conversation traces how legal writing programs are using generative AI for drafting, feedback, and reflection, while preserving the human judgment core to lawyering.

Finally, the group explores what “teaching with AI” really means—from classroom prompting experiments and student-built question sets to the rise of multimodal and voice-driven tools that are redefining how future lawyers think, communicate, and learn.

Key Takeaways

  • AI in the Classroom: Law professors are using generative AI to teach reasoning, writing, and judgment—turning classroom experiments into lessons on what lawyers uniquely bring to the work.
  • Balancing Rigor and Innovation: Legal writing programs are integrating AI literacy while reinforcing integrity, critical thinking, and authentic skill through redesigned assessments.
  • Practice-Ready, AI-Ready: Future lawyers will need fluency in both technology and judgment, as multimodal tools reshape how students learn, draft, and communicate.

Final Thoughts

AI isn’t replacing legal education — it’s reframing it. The professors shaping the next generation of lawyers are blending traditional rigor with AI fluency, ensuring future graduates understand both the tools and the timeless skills that define great lawyering.

Transcript 

Introduction

Jen Leonard: Welcome, everyone, to the AI and the Future of Law podcast. I’m your co-host, Jen Leonard, founder of Creative Lawyers, here as always with the phenomenal Bridget McCormack, president and CEO of the American Arbitration Association. Hi, Bridget, it’s wonderful to see you.

Bridget McCormack: Hi, Jen. Great to see you, too. I think this is like meeting two of four or five for us today. So, great day for me.

Jen Leonard: Great day for me as well. We are exceptionally lucky to be joined by two of our favorite legal educators: Jonah Perlin from Georgetown University Law Center and Dyane O’Leary from Suffolk University Law School. Dyane directs Suffolk Law’s nationally recognized Legal Innovation and Technology (LIT) program, and she also directs the law school’s LIT concentration and is a professor of legal writing at Suffolk. So we’re thrilled to have Dyane here. Hi – welcome, Dyane.

Dyane O’Leary: Hello! Thanks so much for having me.

Jen Leonard: Thank you for being here. And we’re also joined by Dyane’s good friend – we’ve watched them interact a lot on LinkedIn, which is why we have them here today. They’ve been talking a lot about AI and legal writing. Jonah Perlin is a professor of law and legal practice at Georgetown University Law Center. He focuses on first-year legal practice as well as advanced legal writing courses. He is also the host of the How I Lawyer podcast, a top 30 careers podcast for junior lawyers. Welcome, Jonah.

Jonah Perlin: So great to be here. And, to be honest, what I’d like to share is that Dyane and I are actually real friends outside of LinkedIn as well – which is not always true. So we’re pocket friends and real friends.

Dyane O’Leary: Let’s just say it’s fun to see Jonah at a legal writing conference. That always spices up my day.

Jonah Perlin: Love it.

Jen Leonard: Did you say “pocket friends”?

Jonah Perlin: Yeah, that’s a term I picked up from somebody else during COVID for the friends you make online that you either haven’t met or haven’t met often in person. My wife calls them my pocket friends. So I’ll say something like, “Oh, my friend told me that there’s this great new game for our kids,” and she’ll ask, “Was that a real friend or a pocket friend?” And my answer is always: pocket friends are real friends. 

Bridget McCormack: I consider my pocket friends very real friends – so I’m with you guys.

Jen Leonard: Well, we’re so thrilled that you’re here today. We’re going to kick off with one of our favorite segments, our “AI Aha!”s, which help inspire our audience to think about how they might use AI by hearing from you what you’ve been doing with AI that you think is particularly magical or interesting. Dyane, we’ll start with you. What have you been using AI for recently?

AI Aha! Moments

Dyane O’Leary: Sure, thank you. I first have to disclose that I fully copied the “AI Aha!” idea and started my gen AI class with it — with attribution to both of you and your podcast, of course. My students start every week with a “from the news” item – what stuck from the assigned reading or video – and an “AI Aha!”. It just kind of kickstarts and orients us. And it’s an online class, so it’s a great way to settle in. So that’s thanks to you – I completely stole that idea. It’s worked well, and students love it.

I have plenty of examples from, you know, the elementary school math problems to taking a picture of the broken thing in my garage and asking ChatGPT to help. But because I’m here as a teacher, I thought I’d share something I did last week in class. I’m teaching Gen AI and the Delivery of Legal Services right now. It’s a 35-student upper-level elective (2Ls, 3Ls and 4L evening students).

We’re starting the course slowly with kind of a “behind the curtain” look at what this technology is actually doing – trying to do a mini vocab lesson on these heavy terms like vectors and weights and tokens, and trying to understand the magic. So we did an exercise where students, on their own using any tool they wanted, filled in the blank in the sentence, “The alien ate the sandwich after the hockey game because …” and we all put in the chat at the same time what our output was.

And we looked at it. Then right after that, we did, “The motion for summary judgment should be denied because …” and all 35 outputs went into the chat. We paused and looked at them. And you all can probably guess that with the first sentence, the model knew nothing coherent – it was all over the place. Like, “alien” and “sandwich” and “hockey” – a kind of statistical word mash-up; it didn’t really know what to do with all that.

We talked about why condiments showed up – mustard and ketchup – right? (Because of the sandwich.) A lot of the answers had the word “intergalactic” in them, and some had, you know, “celebration” or “trophy,” because those relate to hockey. 

And then the summary judgment ones, unsurprisingly, were much more bland and kind of clustered around the same output, because the model had an easier time with that.

Right – based on its training data and the way those, frankly, boring legal words were connected. So the AI Aha was seeing that light bulb go off for them: Oh, that’s where the prediction engine is going wrong. Oh, that’s why it’s funky with that prompt, or why it’s doing better with the summary judgment prompt. That was the learning moment for me and my students just last week.
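For readers who want to peek behind the curtain themselves, here is a minimal sketch of the same exercise in Python, assuming GPT-2 via the Hugging Face transformers library (the model choice is ours for illustration – Dyane’s students used whatever chat tool they liked). It prints the model’s top next-token probabilities for each prompt, making the scattered-versus-clustered behavior visible:

```python
# Compare the model's top next-token guesses for a nonsense sentence
# vs. a boilerplate legal sentence. GPT-2 is used only because it is
# small and freely downloadable; any causal LM shows the same effect.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompts = [
    "The alien ate the sandwich after the hockey game because",
    "The motion for summary judgment should be denied because",
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability distribution over the vocabulary for the next token only.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    print(prompt)
    for p, idx in zip(top.values, top.indices):
        print(f"  {tokenizer.decode([int(idx)])!r} -> {p.item():.3f}")
```

The legal prompt tends to concentrate probability on a handful of predictable continuations, while the alien prompt spreads it thinly across many unrelated tokens – the clustering the class saw in the chat.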

Bridget McCormack: That’s pretty cool. It’s a great way to illustrate vector embedding, right – to show what vector embedding is to a new audience. I like that a lot, that’s fun.

Jen Leonard: I was with a bunch of lawyers this weekend at a cocktail reception, and people were saying they listened to the AI Aha’s. They said, “We love listening to Bridget and how she’s doing these mind-bending AI Aha’s – she’s bringing people back to life and creating humanoid robots and running all these amazing experiments. And then you” – meaning me – “offer yours, and it’s like you’re inviting your neighbors over and just splitting the check at your local restaurant.” I’m like, I could do that one! So I provide the Aha’s for the everyman. We like to inspire from the sublime to the mundane. It’s good to hear we’re reaching your class as well, Dyane.

So, Jonah, what have you been using AI for?

Jonah Perlin: Sure. I’m currently on sabbatical, so I’m focused mostly on the writing side of my job – which is kind of odd, because I usually spend most of my time on the teaching side. One thing I’ve been experimenting with in my writing is using AI. And when I say using, I mean using basic ChatGPT, basic Claude, basic Gemini – not specialized legal models – to sort of be a discussion partner for the articles that I’m writing.

Everybody writes differently, right? I’m one of those writers who needs to write the same sentence 50 or 60 times before I’m happy with it. That means I write a lot. I have a lot of messy drafts. I have no problem moving on to the next paragraph before the first paragraph is perfect, because I know I’m going to come back and rewrite it a bunch of times. But unfortunately, in the past that conversation about my writing just lived in my own head – which is not that helpful a lot of the time. Ultimately maybe it’s helpful, but it takes a long time and a lot of errors.

So what I’ve been doing is writing a paragraph that’s not perfect – a messy first draft, “draft zero,” whatever you want to call it – and then not just asking the AI to “make it better,” because, like Dyane’s example illustrates, I don’t think it even knows what “better” is, even if I give it a lot of information about purpose and audience and document type. Instead, I ask it to critique and offer advice in the mindset or voice of particular people. I pick key scholars from my field that I’m pretty confident the AI will know from its training data.

It’s been extremely helpful. Now, caveat – I don’t know if that’s exactly the feedback those people would give, but it’s useful feedback in the sense that it puts my draft in someone else’s voice. And I tried a little experiment: I did this, and later I got real feedback from one of the people I often use in my prompt (I won’t out them on the podcast). Their feedback was almost exactly what the AI had predicted. And I was like, wow – sample size of one, but still, pretty wild!

You can have this thought partner that’s not just your friend or your spouse or your kids, but a thought partner that’s been trained (linguistically, at least) on any legal scholar in history. It’s been an interesting experience. Sometimes it gives me terrible outputs, but 99 times out of 100 it’s giving me really helpful, thought-provoking ideas.
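Jonah describes doing this in the consumer chat apps; for readers who want to script the same technique, here is a minimal sketch using the OpenAI Python SDK (the model name, prompt wording, and scholar placeholder are our assumptions, not his exact setup):

```python
# A sketch of the "feedback in a scholar's voice" technique.
# Substitute any scholar whose published writing the model likely knows.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = """(paste your messy 'draft zero' paragraph here)"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Adopt the mindset and voice of [named legal writing scholar]. "
                "Critique the paragraph below as that scholar would: flag weak "
                "reasoning, vague claims, and missing moves. Do not rewrite it."
            ),
        },
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

Asking for critique rather than a rewrite keeps the drafting in the writer’s own hands, which is the point of Jonah’s approach.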

Jen Leonard: I love that.

Main Topic: AI in Legal Education

Jen Leonard: So let’s dive into thinking about legal education and how you all are thinking about it. I find it very hard to think about the students who are coming into law school today, in fall of 2025, and what they’ll need to practice three years from now. Because the technology’s changing so quickly, and because legal education – we all know – does not change as quickly, by design.

From your perspective, what do you think every law student should know about AI, regardless of how the technology changes over the next three years when they step into practice? Dyane, maybe we can start with you on this one, and then hear from Jonah.

Dyane O’Leary: Yeah. Thanks, Jen. So, I’ll caveat: there’s not time to cover all of it, right? We could redesign the entire three years of law school to focus on gen AI – but even saying that out loud, no, of course we shouldn’t do that. There are a lot of reasons we should redesign law school in different ways, but I don’t know that it should all be for gen AI.

You know, one thing I think is a healthy dash of humility. The landscape is changing every year. Even these students – the fresh crop from the last few weeks – are coming in more and more experienced with these tools, compared to even just a few years ago.

But what I think every law student should know is that using these tools in their personal world (which is terrific, and as we’ve discussed, a great way to play and learn and get comfortable) is still not the same as using them in legal workflows. So I think what every student needs is a healthy dose of, “I’m not a lawyer yet. I haven’t experienced legal workflows. I don’t understand how things work yet.” And thus a healthy skepticism or patience, if you will, about how they’re going to use these tools as lawyers.

If I can offer another point: I also think students need to have a question set. I have been so surprised to hear my students come back from internships and externships and jobs saying they’re either told nothing about the use of AI, or they have one little policy that’s pretty unclear about what they can use and when – or, on the flip side, some are getting great direction and training. So I tell my students: so much is going to change, but at least have, you know, four or five questions that you ask in any work environment or clerkship, no matter when you leave this building. You ask, “What am I allowed to use? What’s the policy here? What’s the lay of the land?”

So yes, we can talk about prompting skills and all those things for today’s students. But to me, the basics are: the humility to know that you need a legal skill set to use these tools, and also knowing enough to ask what you don’t know – to figure out what your environment is doing and is comfortable with. Because as we all know, that landscape is hugely varied right now, and I think it will be for a while.

Jonah Perlin: So, first of all, I would agree with everything Dyane said. I think being able to have a question set is probably the most important skill we can give our students going forward.

What I would add – or maybe phrase a little differently – is that on the one hand, I think nothing should change about law school. And that’s sort of an extreme way to put it, but what I mean is: going back to, well, what do you learn as a law student? Right? I often say on my podcast that one of the best parts about getting a J.D. is you can do anything with it, and one of the most challenging parts about getting a J.D. is you can do anything with it. By definition, we are training individuals to do lots of different things in a three-year period, in a very confined amount of time and space.

One thing we try to do for law students in their three years is change the way they think, the way they read, the way they act, the way they understand the world. That part shouldn’t change. That is going to, in my view, be even more important in an AI-enhanced legal world – because that’s what differentiates us. It’s our judgment, it’s our strategy, it’s our experience, it’s our reasoning.

We have a way of teaching those things. Now, I agree with Dyane – I don’t think we’re always doing the best job of teaching them, and there are ways to change our legal pedagogy. But ultimately, the ability to think like a lawyer is going to become, I think, more important, not less important, in an AI-enhanced world.

At the same time, what it means to “act like a lawyer” is changing much more rapidly than maybe it did 5, 10, 15, 20 years ago. And because of that, I don’t think we can – or should – have the goal of teaching every law student to be able to, like, explain AI as it exists today. I think it’s a good goal for them to be able to have those next-level conversations, but instead I think we need to give them the meta-skills of: How do I deal with a brand-new tool?

How do I deal with a tool that, by definition, didn’t exist when I went to law school? That’s what the Model Rules of Professional Conduct are going to require of future lawyers, and it’s what future legal employers are going to require of future lawyers. So, on the one hand it’s “keep learning to think like a lawyer,” and on the other hand it’s giving people skills and techniques to handle the brave new world that – by definition – will always be braver and newer the second they get into it.

Dyane O’Leary: Jonah, that reminds me of something our dean at Suffolk Law School always used to say when he started the first innovation and technology collection of classes over a decade ago: it was about teaching a new kind of issue-spotting. Up until then, law school was always about spotting substantive black-letter law issues. And his point, in terms of modern legal practice skills, was that a new type of issue-spotting is: Where can I spot new efficiencies here? Where can I find new ways to do this in a more efficient, effective way that saves my client money or time or improves my workflow?

That ties into what Jonah said, in terms of the meta-skill to approach a workflow. Plenty of our students, frankly, are still struggling with using Word and Excel and all sorts of other tools. So I agree with Jonah that our role as educators is maybe to plant the seed in a somewhat tool-agnostic way – because we don’t know, even today, what our students are going to be using, let alone in 3 or 4 years.

Bridget McCormack: That’s all super interesting. And it sounds like you all agree an awful lot – it sounds like we have a lot of alignment in this conversation about what role law schools should play in a rapidly changing technological world. It feels totally right to me that unless our students have the background legal architecture and understand, you know, what is public law, what is private law, what are the sources of law, where do I go to find the sources of law to make sure they’re correct – the tools are not that useful to someone who’s going to use them as a lawyer, right?

Law schools are going to continue to have this fundamental pedagogical mission that in some ways is unchanged, and will be unchanged no matter what the technology does. On the other hand, I think – Jen’s question was kind of focused on how lawyers practice their craft – and it clearly is not going to look, in 3 years, 5 years, 10 years, like it does today, right? And law schools have some obligation to make sure that their students are practice-ready, whatever that means. I know there’s lots of debate about what “practice-ready” means.

And this is a literally “I want to know” question: I feel like law schools have so many constraints in thinking about what they can do differently. You know, the regulators require a lot, your students require a lot, your alumni require a lot, your administration requires a lot – it’s hard to innovate wildly. What do you think law schools should be doing to try to figure out how to produce practice-ready lawyers for a changing world? And maybe, what are the experiments you’re seeing – either at your law schools or hearing about at other law schools – that you think merit more attention?

Jonah Perlin: I agree. I think, Bridget, you capture that tension – which, frankly, is not that new, right? I’ve had a bunch of conversations recently with my colleagues who have taught legal research and writing for not just years, but decades. And they talk about the shift of what it was like to teach legal research and writing with book research, as opposed to online research. That was a sea change that fundamentally changed not just what they had to teach, but also how they had to teach it. So that part, I think, is not entirely new – although it’s certainly more expansive now, because I think this touches literally everything a law school does, right?

The challenge – I’ll just be straight up – I think the biggest challenge law schools are facing right now is they’re worried about how to deal with AI in the present in relation to how they grade students. And I think any conversation about preparing practice-ready lawyers is always going to be next to – or maybe behind – the question of, how are we going to do what we’ve done for the last 100-plus years since Christopher Columbus Langdell created the curriculum, right? How do we test students? How do we do an issue-spotter exam where, frankly, an AI can issue-spot almost as well as the average 1L? Because that’s a perfect task to ask an AI to do.

So I think the challenge, from my perspective – and I want to listen to Dyane on the experiment piece, because I think she’ll have more on-the-ground advice – but the tension I keep seeing is: How do we grade our students in a way that makes them practice-ready? And I don’t think there are answers to that question yet. And frankly, I think I’m maybe in the minority, at least in my home institution, in believing that we need to integrate AI throughout the curriculum – in graded assignments and ungraded – in order to do this right. We can’t just put our head in the sand.

But I think reasonable minds can differ, and my colleagues who would come out on the other side have really compelling reasons why we should pretend that AI doesn’t exist – as one path to getting AI-ready lawyers.

Dyane O’Leary: Yeah. Jonah, from what I’ve heard over the past three years as this has evolved, I think reasonable minds couldn’t differ more on this. You’re right: the knee-jerk academic integrity reaction is still front and center at a lot of universities – locking it down, prohibiting it, basically asking, “How do we work our old model into this new world?”

That said, I think that has certainly changed at a lot of law schools, in different pockets. So maybe it’s helpful to think about where and how AI is getting into law schools. One is electives. There’s a new group of “AI law professors,” and some of us have taught these classes before. But certainly I have folks reach out to me probably every week who are designing a new AI elective for their law school – “Can you share your syllabus?” So that’s a pocket, right? It doesn’t touch every law student, but it’s a growing set of electives, just like e-discovery didn’t exist 20 years ago as an elective, and now it does.

So there’s that pocket. The other huge one is the legal writing course, which does reach all of your law students, and in an important way in a small setting. And that’s been kind of a sea change as well, I think. I’m not going to guess how many, but many legal writing programs are experimenting with how to get this into their students’ experience – for the reason you just said, Bridget, recognizing that this is part of the practice of law now.

The pushback to that is, “Well, they’re going to outsource their work. They’re not going to know how to do it – they’re just going to ChatGPT their legal memos.” But what’s so remarkable to me is everyone I’ve seen or spoken with who’s integrating AI into that first-year class is doing it with this really terrific balance – kind of what Jonah said, back to basics. And here’s an example: Yes, you might work with students on prompting and evaluating outputs and talk about the ethical use of AI. But then maybe two weeks later, you’re doing an oral exercise where they have to come into my office and brief me on a legal issue.

Now, they can use ChatGPT all they want to prep for that, but when it’s just me and them, face to face – sitting and talking through a problem with someone is a hard legal practice skill for today’s generation, but so valuable, because you can’t cheat it. You have to really know the information.

It seems more legal writing programs are also shifting a little bit back to a closed-book type of exam. For instance, at Suffolk we’ve embraced AI – we’re integrating Hotshot’s legal AI modules into the first-year fall curriculum, we’re doing an exercise on AI outputs and prompting and all the things – but for the first time in a while, our students will take a closed-book legal writing final in December with just their brain, their fingertips, their laptop, and a closed packet of materials. They’ll have to draft something in three hours.

So again, when people worry “oh, we can’t revamp the entire curriculum for AI,” the truth is most programs are doing it with a balance. And it feels kind of counterintuitive, but actually – like Jonah said – you need those critical thinking skills yourself to use these tools effectively. So how do we ensure both are happening? And also, there’s the bar exam, right – we still have to prepare our students for the bar exam. So that’s kind of the carrot for students, if you will: you’re going to learn this technology that will help you become more practice-ready, but you also need to develop that skill set that all of us developed decades ago, for all sorts of other reasons.

So I feel like that’s where law schools are at. And I’ll add one thing I didn’t say: the core curriculum of teaching doctrine – with that issue-spotting, one-shot exam and maybe a take-home paper – has not changed one single bit. We’re picking at the edges, but from my point of view, that core hasn’t changed at all. That’s what I’ve seen over the last year or two.

Jen Leonard: Yeah. Can I tie together the comment you just made, Dyane, with a comment Jonah made earlier? Jonah focused on the present concerns – you know, what does this mean for current academic performance, how we assess, ensuring academic integrity – and you described the experimentation and chipping around the edges through electives.

I want to connect that with a very real question I get in my own family. My stepdaughter is seriously considering law school in a couple of years, and I feel like I used to have a fairly solid response for people considering law school: work for a few years, think about what you enjoy doing, align that with what lawyers do every day, go spend time in a lawyer’s shoes or shadow a lawyer and see if that’s really what you want to do.

And now I’m not sure that that’s the right advice anymore, because I spend a lot more time these days working with legal organizations trying to adapt to the change created by AI – and I recognize what you’re saying, that schools are focused on how to transform today.

So, what should the advice be to people thinking about going to law school in a couple of years? Should it be the same advice as always, or should people be thinking differently about it?

Jonah Perlin: Yeah. I mean, I hate to sound like a broken record – which is an odd piece of technology to bring into this conversation – but I think it’s still good advice to see what lawyers are doing today. I think like so much in 2025 and the modern age, the answer is yes-and, right? I think it’s important to see what actual lawyers do in order to figure out whether this is the profession for you, or what kind of lawyer you might want to be.

One of the great parts about being a lawyer in 2025, or going to law school in 2025, is there is so much more information out there about the different ways you can use your J.D. – in ways that just were not true before. It used to be everybody went in assuming they were going to do one of two things, and they kind of fell into that by default.

What I’m seeing with my students now is they come into law school recognizing that there are lots of different things to do. But to navigate that effectively, you need to do more work on the front end figuring out what fits your skill set, what fits your interests, how you want to spend your days. I don’t think AI changes that at all.

At the same time, I think this is an opportunity for junior lawyers. I just wrote a paper about billing and generative AI. There’s a lot of – I think, rightfully – concern that we’re going to lose all the tasks junior lawyers have traditionally done. I think there’s some truth to that. But I actually think there’s a real opportunity for junior lawyers to stand out in today’s legal organizations and law firms, because they are going to be more native in this technology. And to the extent that some of this is playing without a playbook, they actually have a better set of skills and intuition and experiences that will help their organizations.

The version of that in my own career was when I was a junior lawyer: the partners didn’t know how to use PowerPoint, and all their clients wanted people to use PowerPoint. And guess what – I got put on case teams and got to do things I never would have gotten to do based on my level of legal knowledge, simply because I could use a tool. And I think that, times ten, times a hundred, times a thousand, is a real possibility with AI.

So I think you should tell your stepdaughter and others: go see what lawyers do, try it on, see how it fits – but also try to build some of these tech skills that I think are going to be table stakes for the next generation of lawyers.

Dyane O’Leary: Yeah, I agree with that, Jen. I think the challenge might be trying to observe – dare I say – some higher-level lawyering. And what I mean is, this has always been a problem. I mean, I was a BigLaw lawyer billing hours doing document review – if someone had observed me in that first year or two, I don’t know that they’d be impressed or excited to become a lawyer, watching me clicking away on document review at a big firm.

That being said, if they maybe saw a senior associate taking a deposition, or a junior partner leading a large mediation… Right. So I think the reality is that the “lower-level” work in law – if we call it that – has always existed and has always been part of a lawyer’s workflow. So it really matters at what level you’re trying to observe.

If you’re serious about going into law as a profession, it’s maybe even more important now to see what’s beyond those tools. What’s the higher-level counseling? How does someone become a trusted business advisor? What does it look like when a lawyer’s taking a call from a client in crisis? Or what about a prosecutor who has 40 motions on the docket that day, and generative AI has not impacted their workflow whatsoever?

I’d love my students to get into a courthouse in Boston and just watch and learn. I think those of us who are really involved in the AI space sometimes forget that it hasn’t touched every aspect of lawyering. I think it will, and I think it is behind the scenes – but there’s still plenty of the profession kind of out there, so to speak, for those students to observe and learn from.

Bridget McCormack: Do you think that this technology is just a fundamentally different challenge for law schools than prior waves of technology? Jonah, I was actually in law school in the old days where we still had to use the books to Shepardize, and people would hide the books – that was like a whole thing. (The technology was available, so some of us were like, why do we have to do the book thing? Is that really a good use of our time? How about we spend more time on the substantive law, not this book-hiding business?)

So I don’t know – how do you think of it? Is it fundamentally different than other tech advancements, or do you think, no, it’s just the latest tech advancement? You know, lawyers figured out how to use email, and they figured out how to use online search, so they’ll figure this out too.

Dyane O’Leary: I do think it’s fundamentally different. The line I draw in my brain – and I don’t know if this is the right one, correct me if not – is between the business of law and the practice of law. To me, the internet and COVID’s remote-lawyering boom (and all the skills that came with that – like knowing how to do a Zoom interview) were always about the how of lawyering, the processes, the business side.

To me, this technology strikes more at the substance of lawyering. It really does – the words, the way we build our arguments, the essence of our work product. Those aren’t just channels through which lawyering happens; this is the actual substance. So that’s probably oversimplified, but that’s where I come out: it does feel different than simply reminding students of, say, their obligations under cloud computing policies, or telling them to wear a suit on a Zoom meeting after the pandemic.

This feels like a different opportunity and a different challenge, because it’s striking at the heart of how we actually build the work product.

Jonah Perlin: Yeah, I think we may have found a place where Dyane and I disagree a little bit. I actually think this is a natural progression of tech change in lawyering. I was in law school during the switch to natural-language research (going from Boolean to natural language). So, like, there are always changes in every generation.

I also took a class that was taught by an adjunct professor, because we didn’t have any faculty – I went to Georgetown, where I now teach – who could teach an Internet or Computer Law class. So I had an adjunct teach it. We now have, I think, a faculty of 20 or 30 who just do Law and Technology – and that’s not even counting people like me, who focus more on technology in law.

I took a class called Cyberlaw, which was essentially a one-stop shop: we did one week on First Amendment, one week on Copyright… Now, that concept feels so foreign – like, how could you not talk about technology in literally every single doctrinal class? It would be almost impossible at this point. And in that sense, I think generative AI is the next version of that – of the internet, of computers. I suspect Dyane and I agree on most of what’s happening; I just describe it as a natural progression.

Dyane O’Leary: Yeah, I think, Jonah, those are the two buckets that law schools deal with. When we say “legal tech in law school” – I get that phrase a lot (“What do you do? What does that mean? What’s the legal tech landscape?”) – there really are two lanes, and a lot of people conflate them: the technology of law, and the law of technology.

Right. So the law of technology, to me, is what Jonah described – kind of the evolution of cybersecurity classes, or emerging regulatory issues because of AI (like, do you need a warrant for someone’s Alexa device?). 

To me, the technology of law is more the how – how are you building or delivering your legal services? Whether you’re productizing it, whether you’re using one tab now instead of the eight tabs I had open when I was a law student – you know, now I’m researching, writing, and editing all in one tab, for example.

So those are the two buckets, and different things are happening in each. But the “skills” bucket – the technology-of-law side – is where I see something really novel happening, compared to anything that’s transformed what students do in that bucket before.

Jen Leonard: Interesting perspectives. I’d love to talk about novelty colliding with things we know really well – those things being the rules of the Bluebook. And I have to tell you, my first collision with the Bluebook happened right after I took 1L exams. I sat down to write for law review (and other journals), and I stood up after two hours and ended up walking out because I was like, if the reward for this is doing more of this, I don’t want to do any of this.

Right? So I was never on a journal, and that was the end of my relationship with the Bluebook, essentially. But you all stuck with it! And recently, you had an amazing post about new Bluebook Rule 18.3, which relates to citing outputs from generative AI. 

Can you talk a little bit about what 18.3 is and walk us through the mind-bending nature of trying to capture AI outputs — and your thoughts on the wisdom of that?

Dyane O’Leary: Yes, the Bluebook has become relevant again these days. I’m certainly not the first to have a critique of it. Let’s put it in context: the Bluebook has always been more of a scholarly-focused, academic-writing system, with practitioner writing as a smaller piece. Courts may pay attention, but its roots really were – and remain – in that scholarly world.

Why does that matter? Because when we look at the new rule and you view it through that lens, you see the plagiarism concern. For years, scholars have talked about the difference between law school—where we’re so focused on attribution and avoiding plagiarism—and legal practice, where the second you get into practice it’s: copy, use that person’s standard-of-review paragraph, “plagiarize” everything with your legal hat on.

So 18.3… I always tell my kids that something is better than nothing – doing something counts; doing nothing doesn’t – but here I don’t know that something was better than nothing. So 18.3 kind of punts at the beginning. A lot of those who read it think, “Oh, it’s going to tell me whether and when I should cite generative AI,” but it really doesn’t. It just assumes one would be citing it. Now, to me, that’s the first hurdle – should I cite it at all? I’ll give you my opinion: I don’t think you should. It skips that question and just says, when a writer is citing, here’s a clunky citation format that has you saving screenshots of all your prompts.

So I’ll pause there and invite your listeners to think of your last interaction or prompt with AI. If you’re doing it well it’s not one sentence, it’s not one page. It’s a back-and-forth conversation. So when we think about how cumbersome and unrealistic that is — for there to be a saved PDF screenshot of that entire exchange — that’s the practical yellow flag.

Then there’s the ethical yellow flag: what’s in those prompts? Is there client information? Is that my work product? What are the potential ethical risks of disclosing it? And the burden of combing through it — am I redacting parts of my citation for privilege? You can get a lot of layers down there. So I invite folks to look at it. The commentary so far has been pretty strong that this is not workable — maybe in an academic sense, if we think gen AI outputs should be cited — but I don’t know any lawyer who’s embraced this as important.

And I’ll share what I plan to tell my students. It’s hard, because I want to focus on citation for all the good reasons, right? It’s your credibility, it’s your explanation to the reader. But I plan to tell my students that if you think you need to cite a generative AI output, that probably means you’re using the tool wrong. You’re not doing it the right way. It shouldn’t be a source that you feel like you need to cite to support your writing. So that’s where I think I’ll come out — basically telling them to turn that page and not really look at 18.3.

Jonah Perlin: I agree with Dyane. I think you have to read the Bluebook, not just in this context but generally, and ask: what question is it trying to answer?

When I read 18.3 for the first time, I assumed the question was: if I need to cite an output, what do I do? So, for example, let’s say there’s a dispute between two parties about the advice I gave them about a contract, and it goes to whether the contract was formed or how it was formed. They’d need to cite that as an exhibit, right? Then they would use this rule on how to cite that chat.

This is not a rule about how you came up with your brief or your answer. We don’t cite junior associates. We don’t cite Google searches. So to me, I wasn’t surprised or emotionally triggered by this rule, because I assume it’ll get used once in a blue moon. Others seem to have assumed something different, and that’s where I think the stress around this rule came from. Unless someone tells me otherwise, I’m going to assume that’s at least a reasonable way to read it.

Dyane O’Leary: The problem is, students crave that guidance. They’re new legal writers, new legal thinkers, and for them, when to cite an authority isn’t always clear-cut. That’s the biggest challenge — it leaves us to fill in that guidance. But I completely agree: it was left in no man’s land, and people are interpreting it differently.

Jonah Perlin: Right. But at that point, I agree with you — it’s not an authority, right? You only need to cite it in this way when you’re using it as an authority. And it comes back to, you know, what my students would hate me for saying — because I say it all the time: authority is the building block of all legal analysis. But how often are you actually citing generative AI as that authority? It’s a tool to get you to authority.

Bridget McCormack: That all sounds right to me. And I want to confess a couple of things: I haven’t thought about the Bluebook in, I don’t know, decades. The Michigan Supreme Court, very helpfully, had its own citation norms and rules — which is really helpful when you come to practice there and have to learn a totally different way to cite everything in your briefs.

By the way, lawyers should figure all this out and just come up with one way. There should be a citation singularity — and I nominate you both to get us there. During your conversation, I started thinking: who writes the Bluebook? Who’s in charge of it? Do you all know this? It’s four law reviews: Harvard, Yale, Penn, and Columbia. So it’s law students coming up with it — which is a little bit insane.

One final question. You posted recently about the benefits of multimodality in large language models, and how the ability to talk to your chatbot might be something lawyers could really use. Can you talk a little bit about what you’re thinking there, and how you think that function might be useful to lawyers?

Jonah Perlin: Yeah, totally. That post came out of some experiments I’ve been doing with a couple of really great, effective speech-to-text tools. I’ve basically been trying to use speech-to-text tools as long as they’ve existed, and I will tell you, when I was in law school and I was trying to use them, they were terrible. You had to learn all this lingo, and you had to train it on your pronunciation – it was not very helpful.

I’ve been so impressed by the ability now to just talk out an email and have it formatted properly, with names spelled correctly, and you don’t have to dictate your punctuation. I find that I speak very differently than I write, and sometimes I need both in order to get the best work. And so that’s a great opportunity these tools provide. But it made me think: there was a day and age when a lot of lawyers used speech-to-text – they just did it through another human being, right?

(My parents aren’t lawyers, but I remember as a kid my dad walking around with a Dictaphone, you know, narrating the journal of our vacation. And then he paid to have somebody type it up afterward.)

It opens up a whole new range of potential opportunities – some of which are sort of “what’s old is new again.” And now I just kind of wish that I had learned how to dictate, because it’s a totally different skill. Maybe somebody will create a really great class on how to dictate as a lawyer, because the tools are there now.

The last thing I’ll add on multimodality is: the cool part is, the old Dictaphone couldn’t talk back to you. Now the generative AI tools can talk back, and that creates an incredible opportunity to use the exact same tool through a different medium. I’m honestly just excited about where that’s going. I think lawyers don’t realize how many opportunities will come from the fact that this technology is text-driven but not exclusively text-based.
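For the curious, here is a minimal sketch of the dictation side of that workflow, assuming the open-source openai-whisper package (Jonah doesn’t name the tools he has been testing, and the audio filename is hypothetical):

```python
# Transcribe a dictated voice memo to punctuated text.
# Setup: pip install openai-whisper (requires ffmpeg on the system).
import whisper

model = whisper.load_model("base")               # small, CPU-friendly model
result = model.transcribe("dictated_email.m4a")  # hypothetical audio file
print(result["text"])                            # punctuated transcript
```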

Dyane O’Leary: And, Jonah, my students are doing so many interesting things with that. I had a student who told me she uploaded her Evidence outline and just engaged with it – quizzed herself on it, kind of used a chatbot as a tutor – which was amazing. I had my students listen to my syllabus through Google’s NotebookLM as if it were a podcast, and we talked about how receiving information that way is different from just reading a boring PDF syllabus.

So I think we talked about different pockets of legal education – like the academic support space. You know, I think academic support educators are doing a lot of great forward-thinking work on how this technology can be used as kind of the next-best personalized tutor, in all these different modalities, for students who really do learn wildly differently today than we all did.

Jen Leonard: Yeah, I love that. I use voice mode for so many different things. Now that I work primarily out of my home – and I don’t have office mates – I find it so helpful to talk through things with a chatbot in voice mode, much more than writing. I know a lot of lawyers like to think through writing, but I’ve learned that’s not really how I process information. And to both of your points, I feel like if I’d had access to this as a law student, my entire educational experience would have been much different – if I could have engaged with the content through that format. So it’s wonderful for all of your students that they have such forward-looking professors who are thinking in innovative, experimental ways about how to support all of their learning.

October 14, 2025
