Summary
In this episode of 2030 Vision, Jen Leonard and Bridget McCormack delve into the transformative power of AI in justice and governance. Drawing from Dario Amodei’s visionary essay Machines of Loving Grace and Adam Unikowsky’s practical experiments with generative AI, they discuss how powerful AI is reshaping legal processes, reducing bias, and enhancing creativity. The conversation explores AI’s potential to unlock innovation, improve fairness, and drive societal progress while emphasizing the importance of ethical AI development and global collaboration.
Dario Amodei "Machines of Loving Grace" Essay
Adam Unikowsky Substack
Key Takeaways
- AI can resolve ambiguities in legal language, reducing litigation and improving clarity in laws and regulations.
- Powerful AI’s rapid advancements could lead to unprecedented breakthroughs across multiple disciplines.
- Judicial systems can use AI to promote impartiality, transparency, and monitoring of fundamental rights.
- AI’s ability to solve problems creatively can drive innovation and unlock new opportunities in law and governance.
- Public defenders and legislative offices can use AI to enhance efficiency and clarity in their work.
- Shifting from traditional SEO to AIO (AI Optimization) is changing how organizations reach audiences.
- Responsible AI development is essential to align technological progress with societal values.
- AI has the potential to redefine work roles, creating more impactful and fulfilling opportunities.
- Addressing human bias in judicial decision-making highlights the importance of transparent AI tools.
- Integrating AI into governance and justice systems can improve public trust and confidence.
Transcript
Jen Leonard: Hi, everyone, and welcome to this episode of 2030 Vision: AI and the Future of Law. I am your co-host, Jen Leonard, founder of Creative Lawyers, and I'm thrilled as always to be joined by the fabulous Bridget McCormack, President and CEO of the American Arbitration Association.
We record these podcasts every other week to try to help the legal profession keep pace with the relentless progress of artificial intelligence, and to connect the dots between what's happening in the broader tech landscape and the work that lawyers and legal professionals do throughout our profession.
Every week we have three different segments. We start with our AI Aha! Moments—those are the moments since our last recording when each of us has engaged with AI in ways we find magical. Then we jump into a couple of definitions. There's a whole host of terms we need to become familiar with to understand AI and to process some of the information out there. So, each episode, we share one or two definitions you might want to know about. And then we jump into the main topic.
Today, we're focusing on thought leadership as our main topic, with two leaders: one in the broader tech landscape, Dario Amodei—the founder of Anthropic, which makes the Claude AI interface; and the other is Adam Unikowsky, an appellate lawyer with Jenner & Block who writes a fantastic Substack about the future of law and all the issues we're talking about. So if you do nothing else after today's podcast, be sure to subscribe to Adam's Substack. You'll learn a lot, and he is incredibly persuasive, smart, and forward-thinking. We're going to dive into that, but first, we'll start with our AI Aha moments.
AI Aha! Moments: How Otter.ai and the Shift from SEO to AIO are Transforming Tech
Jen Leonard: Mine is a quick one, Bridget. This isn't actually my own AI Aha, but something I learned from presenting to a group of law firm partners. I asked them how they were using AI outside of their practice, and one of the partners shared that her father has dementia. She frequently goes to the doctor's office with him, but after the visit he can't recall the doctor's instructions.
So she and her father started using the Otter.ai app, which is a voice-to-text application. She has the doctor talk to her father's phone to generate a transcript, so he can later pull it up and remind himself what the doctor said. She can also pull it up and share that information with him. I thought that was a really interesting use case. Any AI Aha moments from you since our last recording?
Bridget McCormack: Yeah, I love that. I don't even think I have dementia, and I feel like that's a good use case for me! Sometimes I know I'm going to want to think more about something later, but I don't have time to listen carefully at the moment. Producing a transcript would be a great idea for those situations.
I had one this week that—I'm not sure it shows much creativity on my part—but it's a reminder of all the different things this technology is really useful for, and I'm not sure where else I would have gone for this kind of knowledge.
We're building a new website at the AAA-ICDR, and as you might imagine, some of the things that you maximized your website for historically—search engine optimization—are perhaps becoming less and less relevant as people change their habits and start using generative AI tools over traditional search tools. There's a new term; I think they just call it AIO instead of SEO. And it's not the same. The kinds of things you'd do to build a website that would be attractive to the frontier models—so they ingest the material, understand it, and can then supply it to people who are looking for the things you do—call for different ways of organizing information and talking about what you do.
So, first of all, I wanted to understand the background of both SEO and AIO. When I was a judge, if I had a case in a new area, I would find a law review article or treatise to understand the background against which I would have to figure out the specific question. I felt I had to do that background learning so I could be a better participant in the team calls we're having about this new site. So I first just did the traditional tutoring: “Teach me about SEO and AIO and what that historically meant for people who were trying to build a website that could reach the people they were interested in reaching.”
Then I had it ingest our current website and give me feedback about what changes we could make right now to make it a better site (even before we build a new one) for search engine optimization. It gave me a really detailed list of very specific things—none of which are very hard to do—things we could do immediately. Then I asked, using the same website, “What are the kinds of things you would do to increase it for AI optimization?” And again, it gave a really specific, detailed list. It was the best learning experience I've had in a new area.
It was such a wonderful way to learn; we've seen this kind of AI-assisted learning over and over, but that specific “go ingest this website and give me feedback” approach was pretty interesting.
Jen Leonard: I love that. And I'm grateful to you for introducing me to the term “AIO,” because I'm starting to hear conversations about marketing to the AI. We're going to be engaging with information differently than we have in the past, and I didn't have the language to talk about AI optimization. SEO stands for search engine optimization—that's for a Google-based world, right?
Bridget McCormack: Exactly. Sorry, maybe I should have spelled that out. If you're Googling, SEO is a whole ballgame. But if you're not, you really want to make sure that your content is understood by these frontier generative AI models so they can feed it back to people looking for the kinds of things that you do. That was super interesting—just a reminder that AI is a fantastic tutor for any new area you need to learn quickly.
Jen Leonard: We had another episode on solos and small firms, and it also seems that for those practitioners, this is an area where they could really maximize the investment of their resources in marketing their services.
Bridget McCormack: Absolutely. Yeah. For entrepreneurs and small businesses, I think it's another game changer.
Definitions: Chatbots vs Computer Use: Understanding the Next Frontier in AI Interaction
Bridget McCormack: So, definitions. Today we're going to start with something very topical: chatbots versus computer use. Right now, Claude is the only model that has a computer-use function, and it's in beta. Jen, do you want to explain what we mean by chatbots—or both? I can jump in if that's useful.
Jen Leonard: Yeah, please jump in and help me, because I'm just starting to understand the second piece. Chatbots are probably familiar to most people listening. A chatbot is an interface where we ask a question of the AI and the AI responds to us. These aren't new to generative AI—we've been using chatbots on customer support websites, and they've been terrible until generative AI came along (for the most part). But ChatGPT, Claude, Gemini... they're all chatbot-based interfaces.
Lawyers are also using chatbot interfaces in their firms or practices as they test new tools. It's a familiar interface, which I think is what makes generative AI so easily adoptable. But it's important for lawyers—and even for myself—to understand that this is just one way of engaging with artificial intelligence. We're already starting to see a new wave of interfaces that will be infused into our physical environments, which is why I think AI will ultimately be more transformative than the internet.
We're already seeing some new offerings—the first from Claude, and I believe Google announced that they'll have an offering before the end of the year. But Bridget, could you tell us about computer use? What does that mean?
Bridget McCormack: I tried to figure out if I had access to this new way of using Anthropic's Claude, but I don't seem to. It doesn't sound like I simply missed it—I think only certain users have access at this point. I assume you read Ethan Mollick's piece; he had access to it early (he always gets access to the newest tools). I read his piece on this, so, as always, my knowledge is coming from Ethan Mollick—just recycling it for our listeners. But this is a new way of interacting with the technology, where the frontier model takes over your computer and carries out a task that you ask it to carry out.
That might mean it has to do some research; it has to go looking for information across different websites on the internet to figure out what it needs to know to take the steps needed to complete the task for you. It might have to go through however many steps are required to learn what it needs to do. I think in Ethan's case, he had it build a game. I think the idea is this is where we're headed—with asking it to buy you an airplane ticket or make a reservation at a hotel. Those are probably pretty simple asks of it. I think folks who code for a living are going to have many more use cases for this. And I saw over the weekend that Google is working on the same thing and will release it by the end of the year. I also read that OpenAI, of course, is also working on it. And I assume this is just the first step toward agents, which we've been talking about a little bit, as have others. Is that about how you understand the computer-use interface?
Jen Leonard: I think so. I've been trying to sort of wrap my head around what it will mean when AI is inside of our computer systems. And I think you're right—we talked on another episode about agentism, which doesn’t have a clear definition yet, but it’s basically the idea of AI acting on your behalf without needing your direction at every step.
And I've been thinking about ChatGPT as agentic in some form now, because unlike at the beginning, it now goes out to the internet, does searches, and comes back with information—without me telling it where to go. But this is, like you said, another level: just giving it an overarching goal and stepping away. I think in Ethan's case, he talked about stepping away from the computer—it’s going to take a while—and you come back and it's still working. It’s still moving through the steps.
But it’s not asking you for feedback during the process. In Ethan's case, he had to redirect it a couple of times, but it didn't come back and ask, “Should I do this now? Would you like me to do this?” I'm still trying to wrap my head around what the implications are. And of course, we take a very optimistic view, but it’s a little bit wild to have AI unleashed inside your computer or your browser and have it do things without your engagement. But it does seem to be the next frontier, so to speak.
Bridget McCormack: And I think once you have that capability in your phone, that’s when adoption just grows tremendously, right? If you can ask your phone to monitor Delta flights to wherever and let you know when there’s a seat, or let you know if there’s a chance to get on an earlier flight—who’s not going to do that, right? I mean, at least, it’s hard to imagine.
Jen Leonard: I think that’s also why—and this is outside the scope of our conversation—but people are sort of down on Apple right now because they haven’t lit the world on fire with their AI offerings yet. I think they're making a smart play: when these systems become a little more refined and are integrated into something everyone already has in their pocket, they’ll leapfrog some of their competition in owning the AI space. Because it will be seamless, as Apple products are.
And like you said, I could see it doing all sorts of really useful things without you needing to direct it—on your phone.
Bridget McCormack: Which is Apple’s way, right? I mean, other big tech companies go first and make the mistakes, and then Apple waits until all the rough edges are smoothed out. Then they deploy it in this really user-friendly, easy-to-use kind of way.
But one of the things I read in a comment about this new computer-use function is that apps won’t matter anymore. Because if your computer is the one trying to find its way between different apps to take the steps it needs to take—whatever it’s trying to do for you—it doesn’t really care about their interface. It doesn’t care if they’re pretty. It doesn’t care if they’re super easy to use. It just wants to get its work done.
So that’ll be really interesting—if it really downgrades the app market. I don’t know if that’s right or wrong, but I read one commentator saying that.
Jen Leonard: And it’s funny—we’ve only had apps on a phone for about 15 years, but it already feels like that’s just the way it has to be. Just like Google has to be the way you optimize for people getting to your website. And it does seem like, in the next couple of years, the things we thought were sort of canonical about using technology are going to shift really quickly.
Main Topic: Machines of Loving Grace Explained: AI's Role in Governance and Law
Bridget McCormack: Yeah, exactly. Well, that's a great segue to Dario Amodei's essay—which I don't know if you or I saw first, but I know we both texted each other pretty quickly, late at night, saying, “Wow, we have to read this and we probably have to talk about it.” So why don't you get us into the conversation? Who is Dario Amodei?
Jen Leonard: Dario Amodei stands out to me as someone who is particularly thoughtful and almost hyper-educated in some of the foundational topics behind artificial intelligence (like neural networks). He is the former Vice President of Research at OpenAI.
He and some others left OpenAI to build Anthropic, which is the company that makes Claude—an AI interface that you and I use frequently, especially for writing. He has a really deep background in physics and in human neuroscience. He earned advanced degrees from Caltech, Princeton, Stanford, and the Stanford School of Medicine. He is a very intelligent, well-educated expert on physics and on the way the human mind works.
He's also worked at almost every major AI tech company in the world, including Baidu (the leading tech company in China), Google, OpenAI, and now Anthropic. He is a trusted source not only on the development of the technology and how it mimics the human mind, but also on the dangers of AI. That's why he stands out to me among tech leaders.
Dario is very concerned about some of the risks of generative AI on a society-wide scale, which is one of the reasons he formed Anthropic—to develop AI responsibly. He's testified before Congress about the dangers of AI. You'll remember that moment last fall when OpenAI's board of directors briefly fired Sam Altman before he was reinstated. Many suspect it was because they were concerned Sam was moving too quickly and recklessly in developing powerful AI. They really wanted Dario to come lead OpenAI; they reached out to him and he declined.
He's a very important, thoughtful, and smart person in the AI landscape. He recently wrote a really long piece (I think around 15,000 words) called "Machines of Loving Grace," which I kind of laughed at when I first read it. It sounds robotic, but it actually comes from a poem about living in a world where we coexist in harmony with machines. He wrote the piece because he'd developed a reputation for being a bit of an "AI Doomer." Kevin Roose of The New York Times wrote a profile of Anthropic last year and said it's like the place of AI doomerism—that it's the reverse of what you'd think the Silicon Valley mindset would be, because they're so concerned about what they're developing. So Dario wanted to write a piece that presented all the things he thinks are possible in a positive light because of powerful AI, and to underscore that AI has to be responsibly developed and thoughtfully shaped, or we will not be able to achieve those positive outcomes.
So just a few points from Amodei’s essay. We’ve talked about artificial general intelligence on this show, which is the term most leaders in the tech community use for an ill-defined point at which AI becomes better than most humans at most economically important tasks.
He says he prefers the term “powerful AI”. He defines powerful AI as a point at which AI performs at a level that’s above Nobel Prize winners across all disciplines, right? That’s how I understood his definition to be.
Bridget McCormack: That's exactly right. I found that fascinating, because being above Nobel Prize winners in every discipline is pretty powerful.
Jen Leonard: So, above Nobel Prize–winner level in all disciplines, across a sort of collection of technologies. And he and others believe that we are going to reach this point sooner than most predicted, because of the accelerating advancement of AI—likely in the next few years. You can debate the timelines, but the point of his essay is that when we reach this point, we’ll have the capacity to drive 100 years’ worth of progress on a host of different activities in the span of five to ten years, because of the extreme capabilities of these machines.
And one thing I want to call out before I go through the focus areas—because I found it interesting as someone who is really fascinated by human creativity—is that he makes the point that one of the major limitations on breakthroughs across many human frontiers is not our intelligence, but our ability to be creative with that intelligence and draw connections across things that might not be obvious.
And one of the reasons he’s hopeful we’ll make progress so quickly with AI is that it can be more creative and find different connections faster than we can. I think that’s interesting, not just because I love creativity, but also because one of the myths about AI is that it will never replace our creativity—that it’s this core human trait. When, in fact, it might actually be more creative than we are, and therefore more capable of unlocking new value.
Bridget McCormack: Yeah, I found that particular take so interesting. And he gives great examples of how humans had the ability to make some of the breakthroughs we eventually made decades before we actually made them. It was just that we didn’t have enough time or the sheer scale to think through every different way of approaching a problem. It just took humans two decades longer than it would have if an unlimited number of machines had been doing the same work. So it makes a lot of logical sense to me—it just wasn’t a way I had thought about progress having such an impact. He really convinced me.
Jen Leonard: Funny, too—I’ve been listening to the audiobook of Sapiens by Yuval Harari while walking back and forth with my son to school. And this morning, he was talking about the invention of the steam engine, and how long—I think it might have been over a century—it took between recognizing that steam had the capacity to set off tea kettles and pop lids off of pots, and realizing that that ability for steam to make things move could actually be used to move products or even people.
It made me think of Dario’s essay—like, an AI might have been able to figure that out the day it was asked to. It really underscores the point of how much progress we can make because of creativity.
Bridget McCormack: So, he addresses five different areas where he thinks we could see significant positive change in what might be a very short timeline—an extremely short timeline. And they’re all fascinating. First of all, just go read the essay—don’t only listen to us, go read it. It’s worth it.
But walk us through a little bit of what those areas were and what you were most drawn to.
Jen Leonard: Given his background in neuroscience and medicine, it's not surprising that he's really interested in AI's applications in science. He starts the essay by discussing potential advancements in biology and physical health, and some ways AI can help us conquer and defeat illnesses.
He mentions Alzheimer’s in particular as a condition that we could potentially defeat in the not-so-distant future. And coupled with that, he talks about neuroscience and mental health. He has the most to say about those two topics because of his deep expertise. He forecasts that we might have the capacity to solve many conditions that we now consider chronic—both physical and mental. He even suggests we may be able to eradicate things like anxiety and depression, which is mind-blowing to me.
That’s because AI could give us a greater ability to understand the physical roots of mental health and tackle them more effectively—all of which would have benefits that ripple throughout society. We could spend weeks thinking about the implications.
He also acknowledges that the main limitations on those advancements are not in the AI itself, but in the physical world—things like vaccine trials and the time it takes to implement treatments. The bottlenecks will be logistical and regulatory, not technological.
And then he makes a prediction that we might reach what he calls “escape velocity” for people who are middle-aged—meaning we could potentially live to be 150 years old. For those of us who are currently middle-aged, I’ll admit, I was not that excited about that part. I don’t really want to live to be 150.
But I guess if that’s your goal, he thinks that could be possible with the advancements we might see from AI in just the next few years. I was most grateful to him for this section because it’s the part I understand the least—and it’s the part where he’s most expert. He then transitions to talk about other areas, and he’s very clear that these next ones are outside his core expertise. But he expresses optimism that we could also see meaningful progress in them, too.
He talks about economic development and the eradication or mitigation of poverty—both in the U.S. and globally. Then he moves on to peace and governance, which is where he discusses the legal system and some of the ideas we’ll explore more deeply in a moment. And finally, he touches on the future of work and meaning. That section is the least developed in the essay—he concedes he doesn’t have all the answers there—but he raises the possibility that we could find more meaning and more meaningful work in our future.
Personally, that’s the part I worry about the most: what the future of work will look like for people. But he starts the conversation with some optimistic visions about how those first four areas—health, neuroscience, poverty, and governance—might combine to create space for more fulfilling work and lives.
So he’s optimistic in the essay. He wants to make clear that this is the reason he does the work he does—because he believes we can unlock those advancements. But he also acknowledges that there are enormous challenges—mostly human, social, and political—that could prevent us from reaching those goals. That’s why he wanted to write the essay: so we can start being more thoughtful as a society.
He also emphasizes the need for international collaboration. He argues that liberal democracies should take the lead in shaping AI development to ensure that illiberal regimes aren’t the ones determining how these systems evolve. That part has generated some controversy—with some saying it could escalate an arms race or promote international conflict—but his core concern is about AI safety, responsible development, and aligning AI with human values.
And so, what we wanted to dig more deeply into—and connect with another leading thinker in our profession, someone you brought to my attention—is his portion on judicial systems.
Because we wanted to set the stage well for the second part of the conversation, we’re actually going to read from the essay—a couple of excerpts we think are particularly important for lawyers.
I’ll start with the part that stood out to me, Bridget. Dario writes in his section on peace and governance:
“For example, could AI improve our legal and judicial system by making decisions and processes more impartial? Today, people mostly worry in legal or judicial contexts that AI systems will be a cause of discrimination, and these worries are important and need to be defended against. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions—not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone.”
And Bridget, I know there was a part of his essay, given your background as an appellate judge, that stood out to you as particularly interesting for lawyers and judges.
Bridget McCormack: Yeah, and I’m going to read it in just one second. But right before that, he correctly—in my view—references the inherently subjective nature of decision-making by judges, arbitrators, and anyone who's making decisions in a legal context. Because so often, the standard by which the decision has to be made is infused with a really subjective term.
He uses “cruel and unusual punishment” as an example, but I immediately thought of all the times reasonableness was part of some standard. Reasonableness is just inherently squishy. What’s reasonable to me might not be reasonable to you. And smart, well-meaning people with different priors about a particular subject can genuinely have different understandings of what that means.
So it always felt to me that bias—in the way that we all have bias, just human bias, inherent bias—is part of judicial decision-making.
He says, "I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy real-world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making. Transparency would be important in any such system, and a mature science of AI could conceivably provide it. The training process for such systems could be extensively studied, and advanced interpretability techniques could be used to see inside the final model and assess it for hidden biases in a way that is simply not possible with humans. Such AI tools could also be used to monitor for violations of fundamental rights in a judicial or police context, making constitutions more self-enforcing."
And that's one of the great promises of this technology in the context of the rule of law: to the extent that it can bring more rigorous and also more understandable processes to bear, it really has the potential to grow public confidence in the justice system.
Jen Leonard: Even before we got the chance to work together—remember, years ago we did a podcast where we both talked about what happens when technology is only used in the private sector for legal purposes, and the public sector and courts don't get to take advantage of it (this was pre–gen-AI when we had that conversation). What would that do to the stability of our democracy if people have less and less confidence? They already have low confidence in the legal system and the judicial system.
We were both really taken with Dario's essay and the opportunities it highlights.
We're going to turn in a second to Adam Unikowsky's piece, but we were talking before the podcast about this idea. The biggest concern people raise when I talk about AI is, "AI is biased. It's flawed. We can't trust it." It's not that I disagree with that, but I'm amazed at how much we trust human beings—who are deeply flawed and deeply biased—to interpret the law. As you said, these are really subjective terms that have real consequences. Think about the famous studies of criminal sentences before and after lunch; I'm amazed at how unwilling people are to consider a world in which AI could be less biased than human beings are.
Bridget McCormack: Not only less biased, but with the ability for us to see what it's done and study it across many decisions—access we could never have with human decision-makers. So this is very exciting to me: the tools it might bring to bear on a pretty important operating system in our society. I'm excited about this.
Main Topic: Discussing Adam Unikowsky’s Approach to Generative AI in Law
Bridget McCormack: This connects to some of Adam Unikowsky's work, in particular his most recent piece. (Well, it's not his most recent piece overall, but it's his most recent AI-focused piece, on criminal sentencing. Adam writes about all kinds of interesting things—do follow his Substack, he's really worth it.) Adam is an appellate lawyer at Jenner & Block who's argued in the U.S. Supreme Court many times. He clerked on the U.S. Supreme Court. He's a very, very smart lawyer who also has a tech background of some sort—I forget exactly what, but he's definitely more of a technologist than I am. He's embraced this new technology in a way that we don't see widely across the legal profession, especially not in his corner of it, which is very litigation-focused.
Lawyers who handle the kinds of appeals Adam works on are really talented writers and oral advocates. Humans have always felt like a very important part of the appellate lawyering story—at least they always did to me as a former appellate judge.
I should note, Adam actually argued a case before me once when I was on the Michigan Supreme Court. He reminded me of that—I didn't remember it initially, but I looked back at the case and I agreed with his position. Anyway, in this piece (which I encourage everyone to read), he conducts a great study using an Eleventh Circuit case about a criminal sentencing enhancement. The case is United States v. De Leon. In that case, the defendant, De Leon, was convicted of armed robbery, and he was sentenced with an enhancement for—and I quote—"physically restraining the victim." The sentencing guidelines allowed the trial judge to enhance a defendant's sentence if the defendant had physically restrained the victim. In this case, there was no dispute that he held the victim at gunpoint and the victim did not try to get away (he could not move). But, as you might imagine, the term "physically restrained" might mean lots of different things.
It's an ambiguous term when applied in a case like this. So Adam, who's done other experiments using generative AI (and only frontier models—he uses Claude Sonnet as his favorite model; I'm sure he has access to legal-specific tools as well, but Claude is what he's using for these experiments, which I find fascinating), decided to use this case to see what Claude could do with this legal question.
Adam will be the first to say that the legal question here isn't complex—it's a single question, and there's limited precedent. In the Eleventh Circuit, I think there were three cases the judges had to follow. It wasn't the kind of case with lots of authority that's hard to manage or organize, so it's a relatively simple question. But it's the kind of question that appellate courts come out differently on all the time; in fact, there's a circuit split on this very issue of what "physically restrained" means.
Adam makes a few arguments in the piece, which I can summarize like this: If generative AI had been used when the sentencing enhancement was drafted, we might not be here—we'd have far fewer disputes, because AI can resolve an ambiguity at the front end. And he showed how that would work. In this case, he asked Claude to identify where the sentencing enhancement's language might cause ambiguity in hard cases, and then to suggest how someone writing the enhancement could clarify it to eliminate that ambiguity.
If you only read that much, it's fascinating. Basically, he makes the point: why wouldn't you do that with every regulation and every statute? Let's get rid of all the litigation over poorly drafted statutes—and as someone who decided a lot of questions about poorly drafted statutes, I can tell you that's a lot of litigation.
But then he says, "Let's see what else Claude can do." So he asked Claude to write the opening brief in this case. He gave it a substantial prompt (though not an overly long one, since it's a straightforward question). He did provide the three cases the Eleventh Circuit had previously decided on this issue. He told Claude to be concise, and he emphasized that it was really important not to hallucinate—because this is important litigation. (It's funny that the AI actually behaves better if you remind it not to hallucinate!) Then he asked it to draft the opening brief.
He then shows the reader what that AI-drafted brief looks like, and he compares it to the actual brief that De Leon's lawyer filed on his behalf. I should mention, as you probably know, in many criminal appeals people have a right to counsel, and different jurisdictions meet that need in different ways. But in many criminal appeals, lawyers are assigned to represent people.
And those lawyers are often juggling many, many cases and don't have the kind of time to dedicate to one case that—let's be honest—Adam or his firm could. In his view (and frankly in mine), Claude's version of the brief was superior to the actual brief that was filed. It was just clearer. It could have been, you know, probably punchier (Adam himself says, "I probably could have made the intro punchier," and I'm sure he could have, because he's really good at this). But it was very straightforward, very clear, and it made the best argument.
He then—of course you see where this is going—had Claude do the same thing with the government's response brief. He compared those, and again concluded that Claude's brief was superior to the one actually filed by the government. To be fair to the government, they're managing many, many cases, and the lawyers handling these cases are often overwhelmed with the amount of work they have.
Finally, he had Claude write an en banc petition. (One of the judges in the actual opinion, in a concurrence, said something like, “We have to decide this case this way, but the full circuit should take it up en banc and figure it out, because it's a bit of a mess.”) So he had Claude write an en banc petition, and again it did a tremendous job.
All of this has really important implications for the administration of justice—both in the upstream sense (if we can identify ambiguities in regulations, statutes, and sentencing guidelines before there's a bunch of litigation, we could save lawyers' and judges' time for the harder questions that need humans to decide them) and in the downstream sense (if we could help overburdened court systems, public defender offices, and government offices get to better answers more quickly, that would be a tremendous advantage). But I've been talking a lot—what were your reactions, Jen, to Adam's piece?
Jen Leonard: I loved it. My initial reaction after reading the whole thing was that I'm going to bookmark it and send it to all my lawyer friends who argue that AI is not capable of doing the kind of work that they do. Adam is so persuasive and thoughtful in outlining exactly how he used it, and he also provides a framework for thinking about how to use AI in different contexts.
One area where I hadn't really considered using AI in practice was for legislative attorneys. When I worked for the City of Philadelphia, some of our most thoughtful attorneys were our legislative attorneys. They would spend weeks debating whether to use one word or another in a statute—what it might mean, and what the downstream implications or unintended effects of a particular phrase might be. Imagine speeding that process up by asking, “What are some of the unintended downstream effects of this language?” You could save weeks of work and a lot of taxpayer resources, frankly, and those attorneys could spend that time on other things.
I also wanted to note (I'm not sure if we mentioned this earlier) that in the De Leon case, Judge Newsom—whom we've talked about before—wrote a concurrence. He's been experimenting with AI as well, and in that concurrence he advocates for exactly what Adam is talking about: using AI to resolve inherently subjective language and provide greater clarity.
One sentence Adam wrote really made me laugh. I'll quote it here: “With essentially no guidance, AI can draft briefs that are better than the briefs drafted by the human lawyers in this case. This isn't intended as a criticism of the human lawyers, whose briefs were fine. It's just hard to compete with AI.” That made me laugh because so many arguments I hear now are the opposite—like, “AI cannot compete with us.” So hearing him flip that on its head made me giggle a little.
He also makes the point that anyone drafting regulations or laws should run their language through AI to detect potential ambiguities. And he says, “Why not do this? If you're the drafter of the document and you don't want to follow the suggestions, then just don't.” I loved that point. We're not saying we're handing this over to the AI entirely. It echoes Dario's point of “I'm not suggesting we literally replace judges.” We're just saying that we could use this as a really helpful tool. I thought it was great that Adam listed criteria we could use (I don't have them all in front of me) for deciding which cases to apply AI to in an appellate context—things like a low-complexity case where there's a circuit split, for example.
Bridget McCormack: He makes another point I love. He basically says: if you're worried that a particular dispute might not be a great one for AI—maybe it's too complex or has nuances you think the AI won't appreciate—you can just ask the AI whether it can do a good job. He shows how he does that and how the AI helps draw lines about what it thinks it can do well and what it probably wouldn't be the best for.
Jen Leonard: And I want to emphasize, I love that he told the AI (as you mentioned), “Don't hallucinate. This is a really important case and you cannot hallucinate.” That's the other big fear lawyers have—because of that infamous ChatGPT lawyer story. I think we're all getting more used to the idea that you can actually tell the AI not to do something, or ask it “How can I use you? How can you help me? And where won't you be good?”
The last thing I'll say is that the end of Adam's piece really resonated with me, because it tied back to a conversation I had with a very thoughtful lawyer friend earlier in the gen-AI era about writing and AI. Their position was that the value-add for them as a lawyer is that they provide an edge: “My writing has a tone to it that is persuasive and punchier than another lawyer would make it, which makes me a more effective lawyer for my clients than the average lawyer.” And I think that's actually true in our current system—if you have a lawyer who's a better writer or has more lively language, they might appeal more to judges (pardon the pun).
But Adam ended his piece by saying (and I'm paraphrasing): if there's a set of cases in which a lawyer's flowery language persuades the court to grant rehearing en banc, but wooden language would have failed, that strikes him as a failure rather than a success of the criminal justice system.
And if those cases don't exist—if the en banc court is disciplined enough in every case to set aside stylish rhetoric—then why not have Claude write the petition in 20 seconds? I just loved that summation of something I've been thinking but could not put into words. It's the perfect argument for why we shouldn't be proud of our flowery language "winning the day."
Bridget McCormack: Yeah, I mean, it’s another reminder that the real question isn’t, “How is this technology good for lawyers?” It’s, “How might it be good for the justice system?” If there are tremendous advances to be made in fields like medicine or neuroscience, we don’t hear doctors saying, “But is this going to be bad for the doctors?” That’s just not how that conversation goes.
And I think Adam’s example here makes a similar point: shouldn’t we be focused on the administration of justice? If that’s what we truly care about, then relying on one lawyer’s flowery language to create an advantage actually undermines that goal.
Jen Leonard: This has been one of my favorite conversations about AI because of the connections among three brilliant thinkers—Dario, Adam, and you, Bridget, for bringing their work together. I really like how this flipped a lot of the usual mindsets and arguments on their head, through the voice of a very talented appellate lawyer.
So thank you to Adam (and everyone should subscribe to his Substack). Thank you to Dario Amodei—who probably won't hear this podcast, but we're grateful that there's a leader thinking critically and thoughtfully about how we might shape a better future using AI. And thank you, as always, to you, Bridget, for surfacing this great content and giving us the chance to talk.
Thank you to everybody for joining us, and we look forward to seeing you on the next episode of 2030 Vision: AI and the Future of Law. Take care.