Artificial intelligence is moving beyond tools and into systems—reshaping how legal work is performed and delivered. In this episode, Jen Leonard and Bridget McCormack speak with Jason Barnwell, chief legal officer at Agiloft, about the rise of AI agents, the evolution of contract lifecycle management, and the implications for legal practice.
The conversation explores Barnwell’s experimentation with agentic workflows using tools like OpenClaw and Claude, including how AI systems can generate their own playbooks and execute work in parallel. He also explains how contract lifecycle management is evolving into a data layer that enables new forms of analysis, simulation, and operational insight.
Key Takeaways
AI agents change how work is produced: As the cost of generating and executing tasks declines, AI agentic systems can produce large volumes of work, creating new challenges around coordination, governance, and oversight.
From playbooks to protocols: Legal expertise is shifting from static playbooks toward structured protocols that can be reused, scaled, and executed across systems.
Contract data becomes usable at scale: CLM platforms are evolving into systems that make contract data searchable, analyzable, and actionable across the enterprise.
The lawyer’s role is evolving: With AI enabling parallel execution, lawyers increasingly define objectives, constraints, and review outputs rather than performing each task directly.
Adoption is driven by incentives: Client expectations, resource constraints, and internal performance pressures are accelerating the shift toward AI-enabled workflows.
Final Thoughts
AI is not simply making legal work more efficient—it is changing how that work is structured and delivered. As these systems mature, the ability to translate expertise into repeatable processes and guide AI-driven workflows will become central to legal practice.
Transcript
Jen Leonard: Hi, and welcome, everyone, to our newest episode of AI and the Future of Law, the podcast where we explore the interesting and rapidly changing dynamics of artificial intelligence and consider what they mean for legal practice, for our clients, and for the public. I’m your co-host, Jen Leonard, founder of Creative Lawyers, here in Philadelphia with Bridget McCormack, president and CEO of the American Arbitration Association.
Today, we are so excited to welcome one of our favorite thinkers in legal to talk about his journey, contract lifecycle management, and the future of law: Jason Barnwell.
Jason Barnwell: Thanks, y’all. I am delighted to be here. I’ve been looking forward to this conversation.
Bridget McCormack: I don’t know if you’re looking forward to it as much as we are, but we’re thrilled to have you.
Jen Leonard: Jason is the Chief Legal Officer at Agiloft and a former mechanical engineer, software developer, and attorney who spent years leading digital transformation inside Microsoft’s legal department before joining Agiloft as CLO.
We love following Jason because he doesn’t just think about the obvious applications of technology, process, and law. He brings deep systems thinking and frameworks that help us zoom out and understand what’s actually happening in the landscape.
Jason, thank you so much for joining us and sharing your expertise. As you know, we kick off every episode with an AI Aha!—something our guest has been using AI for recently that they find interesting, surprising, or compelling. We would love to hear yours.
AI Aha!
Jason Barnwell: A few weeks ago, one of my colleagues at Agiloft was like, “Have you had a chance to play with OpenClaw yet?” And I was like, “No, I’ve been CLO-ing a lot. What’s up?” And he was like, “Man, you’ve got to go play with this.”
So I’ve had a few weird AI Aha! moments. I decided to go play with OpenClaw. Of course, it’s really cool, but it’s also not really a product, so you need to be very intentional about how you deploy it. I provisioned a virtual private server, sandboxed it, segmented permissions, and did all the things.
One of the clever things about it is that you can basically train in skills. It starts crafting markdown files internally that document how to do the thing. It basically creates its own playbook, and that’s how it starts to generate fairly rudimentary, repeatable skills.
That was one little “huh” moment for me, because it gives you a sense of what patterns are going to look like as we move into more agentic workflows. Instead of going through a challenging user experience, you can operate in the communication layer—I was running this through Telegram—and skill it in real time. That was wild.
Then I had something else happen that made me reorient around what the upper bound might be. I started using Claude—specifically Claude Code—as a harness to direct-inject applications into OpenClaw. Let me unpack that a little bit. One of my critical workloads, if we’re honest, is scheduling date night with my wife, because that is mission-critical. Time with my wife is the best. So anything that helps me land that makes life better.
I wanted to do more thoughtful things in OpenClaw. On a plane to Legalweek, with a decent wireless connection, I had this magical experience of needing a thing, being able to describe the thing, and then being able to direct-inject it into OpenClaw. It was a little more complicated than that, but I had it go live in about 15 minutes.
But then it got weirder, because I also had the moment of, “Oh—making software is still hard.” The thing I made worked pretty well within the narrow confines of what I’d cooked up, but it had not been fully smoke-tested. It hadn’t gone through all the user-acceptance testing you’d want. At a certain point, I got a message from my wife asking, “Why do I have hundreds of meeting invitations firing at me from some weird thing?”
And I was like, “Oh, right…I love you so deeply. I want to monopolize your calendar. I want a denial-of-service attack on your calendar.”
But one of the revelations there was—it’s one thing to have a tool you’re using in a single-player environment, where you have a lot of control and visibility and it’s wired directly into your preferences. It’s a very different thing to go into a multiplayer environment where you’re making things that need to work well—and kindly—with other people, other humans, and other machines.
Functionally, what I had done without realizing it was shove a whole lot of demand onto my spouse, not really thinking about the system that we’re operating in.
If you expand that out, it’s going to become very easy to generate demand with these tools. So as the cost of making something that works for me drops, we need to think much more broadly about the impacts inside an institution, inside a family, and inside a society. It’s going to be very easy to create punitive levels of engagement, asks, and interruptions at every layer.
So I don’t have a crisp distillation of the Aha! moment. But I feel like I went through a dramatic arc of: this is amazing, this is an incredible prototyping tool, and this gives me a glimpse of the future. And if you don’t think about doing this in a thoughtful, governed way, it is probably going to create a certain amount of havoc. But you’re guaranteed to learn something.
Bridget McCormack: Can I ask a couple of questions about the setup? Did you buy a Mac mini, and are you confining your OpenClaw setup to a computer that doesn’t have access to anything else about your life unless you specifically give it access?
Jason Barnwell: I almost bought a Mac mini, but then I remembered that I’m very cheap. So I got a $10-a-month virtual private server running somewhere in Oregon. For about $10 a month in compute, I created an M365 account for it so it can email, but it’s not mine; it’s its own thing. So it has reasonable input and output. I’ve given it constrained access. It can see my free-busy availability, but it can’t change my calendar.
I’m actually flipping around different models. That’s another Aha! moment for me. One of the really interesting things about using different models is that you get different levels of capability when it comes to fashioning new tools. As you use a model that’s more tool-capable, it can solve harder problems, because it can break what you’re trying to do into smaller intermediate steps and fashion its own tools to get through them. As you dial up the strength and power of the model, it can literally solve bigger problems for those reasons.
My weird observation is that people have very different engagement patterns with these new experiences. Let me unpack that. If you hand some people a pen and say, “This is a pen,” they say, “Great, it’s a pen,” and they pick it up and write with it. If you hand that same object to other people, they say, “Yes, it’s a pen—but if I break it into its functional capabilities, it’s also a rigid piece of plastic that can be used as a lever, a scribe, or a projectile.” They start turning the product into a tool that can make other tools.
Once you see that pattern, you realize it has a compounding, forward-looking value function. Tools can make better tools, which can make better tools. Suddenly, you’re no longer on a linear path. It starts to look exponential. I think that’s part of what we’re on right now.
My hypothesis is that some people are going to engage with these new tools and say, “Ah, a better pen,” and they’ll get an incremental benefit from a better pen. Another class of people is going to say, “Ah, a thing that is a pen and can do all these other things,” and they’re going to wind up on the exponential curve. I’m not sure how to get people to opt into that second view of the world. I can’t tell whether it’s habit, temperament, or something deeper. But if we can get people to have more of a dopamine response to, “What else can I do with this?” then the world starts to look a little different.
Jen Leonard: I love the point about the pen. It’s something our company works on with lawyers, and it’s something Bridget and I have been trying to get across in presentations. There are people like you and Bridget who are going to take this set of tools and run with it and build new things. But for the most part, in legal, I don’t know whether it’s personality traits, mindset, risk aversion, or the need for a detailed roadmap about how to use technology. It’s really hard to get lawyers to move away from, “This will help me draft a lease faster,” or, “This will help me summarize a court filing faster,” toward the bigger, bolder experiments you’re talking about—the ones that actually let you get ahead in your business and in the service you provide to clients. I don’t really have a question. It just aligns with everything we’ve been struggling with.
Bridget McCormack: I’d say that’s the exception in the rooms Jen and I speak to. The exception is the person who sees the pen and the 30 other things it can do. These are legal audiences, so that doesn’t exactly surprise me, but it does concern me a little bit. I want the legal profession in the game when it comes to figuring out what the future looks like, because law is still the operating system of our society. I want to make sure law stays connected in a way lawyers think is a good idea—but I am a little worried.
Culture, Incentives, and Why Legal Change May Happen Faster Than People Think
Jason Barnwell: I am worried too. And there are two things that shape human behavior at scale: culture and incentives.
If you think about the conventional path to success in our profession—and look, I’m going to generalize grossly and wave my hands a bit—our profession tends to select for, recognize, reward, and promote people who are really good at grinding. They take a known way of making value and turn that crank through as many wakeful hours as they can to produce high-quality outcomes, mostly in a known realm.
I think it is very hard to take people who have been very successful operating in that model and say, well, there’s a very different thing now. And if we’re being very direct, the crank you used to turn can now be turned by something else. Your highest and best purpose is to go build new cranks—to really take the lid off the value function of what we can provide.
By the way, I’m in violent agreement that law is the operating system for civil society. It is how we govern our interactions at massive scale. It’s a technology that evolved because, when we operated in tribes of 150 or fewer, we could police ourselves because everybody knew each other. Law gave us a way to operate at scale.
The funny thing is, the way we’ve designed it is around a scarcity model that now feels broken—obsolete, wrong, or at least no longer resonant. So the how of changing it is really challenging. I can give the bear case, and I can give the bull case.
Here’s the bear case: a lot of social systems move at roughly the speed of the people with the most power inside them. I once read an article about business schools that stuck with me. People go to business school and learn a new orthodoxy—a new way of thinking about a problem space. But for that new way to take hold in the market, they then have to survive a tournament that is still governed by people from the old school. And only after they survive that tournament and accumulate enough structural power can they institutionalize the new way of thinking.
That takes a long time. So the bear case here is: does this take a generation?
I don’t think so. And the reason I don’t think so is that, as the bar of access to these capabilities keeps dropping, competition at all levels starts changing very quickly. On the private commercial side especially, I think the real rate of change comes down to this: how fast are buyers of legal services willing to change their buying habits, and what kinds of substitutes are they willing to entertain?
Right now, we’re in this funny moment where everybody is sort of looking around and asking, “Are you going? Are you going?” And we’re starting to see signs of life—people saying, “Yeah, we’re going.” Once it becomes tasteful to make those changes, I think a lot of demand gets unlocked very quickly.
And the other thing I feel is: we are a resource-constrained enterprise. We have to deliver impact and value predicated on machine leverage, because the returns to capital increasingly demand that kind of leverage. My CFO is effectively saying, “Here is your envelope of resources.” In the past, maybe legal could get away with saying, “Trust me, I’m doing a good job over here. I’m managing risk.” But increasingly, any incremental resource in the enterprise is gated on a data-informed story about value delivery.
Does it create speed? Does it create scale? Does it create control—meaning governance or compliance? When I have a data-informed hypothesis about those things, I have a real chance to bid for incremental resources, bring them into the envelope, and deploy them well.
Those are the macro patterns I’m seeing that make me think the rate of change may not be predicated on somebody learning this in school, surviving the tournament, and eventually getting enough power to institutionalize it decades from now.
Legal Education and New Career Paths
Jen Leonard: I had a question about legal education, because we talk about it a lot on the podcast and think about it a lot. I was having a conversation the other day about what feels like the intuitive answer here: you mentioned that we’ve historically attracted people with personality traits that fit the old system. So do we need to start attracting different kinds of people to law school—people with entrepreneurial thinking, resilience, sociability, and a collaborative desire to solve problems?
That feels like a good answer. But how frustrated would one of those people be if they were selected and then plugged into a model that hasn’t changed at all? You mentioned incentives and culture, and the culture piece feels so problematic and so saturated across the profession that I’m not sure how we bring new thinkers into the fold.
Jason Barnwell: I don’t know, because the incentive models for the academics who really do control law schools feel very far away from the things we’re talking about. I think the things that delight them are poorly aligned with the value function we’re discussing right now.
So then how might the market adjust itself in response? One possibility is that a meaningful portion of the people who would have gone down the traditional path instead start trying to solve these interesting problems in very different ways.
Another possibility is that you start to see a smaller cohort of people who think differently about the value function of practice enjoy fantastic, visible success—and that success starts to accrue status. One of the biggest challenges we’ve had is that the kinds of things that delight me and excite me have historically not translated into: “That’s what great success looks like, and I want to do that.” It’s been more like: “That’s interesting. Good for him.”
But if that alternative path starts producing visible success, status, and relevance, institutions will take more notice, because they do want to remain relevant. And if they start to regard that other path as something that attracts more resources, creates more impact, and helps them do the things they care about, then it starts to feel more native to them.
But I think you’re putting your finger on a hard problem, and I share the concern.
Jen Leonard: I am seeing a little bit of the shift you’re talking about in postgraduate signaling. The law students I worked with—and Bridget, I don’t know whether you saw the same thing—were very risk-averse in their career journeys. You didn’t want to leave a prestige firm unless you were going to another prestige firm or to a really valuable role in a general counsel’s office.
Now I’m seeing my alums go to Harvey and to really interesting legal tech startups. And then I see their classmates on LinkedIn responding, because I think it opens up curiosity for them. If I’m not boxed into this one path that I’m not necessarily enjoying, and this really impressive person is showing me another path, maybe that’s where some of the shift begins.
Bridget McCormack: I feel like I’m seeing that as well. Even in our own hiring, because we’re doing so much building with AI, we now have needs for these crossover roles—roles we’re almost making up names for—that sit between legal and engineering.
And we’re getting really amazing candidates who are interested in those roles, even though you could imagine that being viewed as a risky career move ten years ago, probably five years ago, maybe even two years ago. But now there’s a lot of interest from highly qualified candidates who could definitely stay on the traditional path—or maybe stay on it for a little while, unless Jason’s bull case is right, in which case maybe they’re smart to start looking at other options.
But we should talk a little bit about Agiloft, Jason. Maybe everybody listening knows everything about contract lifecycle management—but maybe not. So tell us what listeners need to understand about it, why AI matters for it, and what difference it’s going to make for what you all do.
What Contract Lifecycle Management Actually Covers
Jason Barnwell: I think it’s fair to presume not everybody knows about contract lifecycle management.
But everybody has engaged with a contract at some point in life. At some point, you’ve had an agreement, you’ve made a commitment, somebody has committed something to you, and then you documented that in some kind of format—hopefully. It doesn’t have to be formal in every context, but in most commercial and business settings, we do write these things down and document the benefits of the bargain.
That is part of the process. If we think about the lifecycle of a contract, there’s the part where we bargain for the benefits we want and the gives and the takes. Then, at some point, we codify that into our private agreement. Then there’s the performance phase—where you do what you said you were going to do, and I do what I said I was going to do. And if somebody doesn’t do what they said they were going to do, then you need enforcement and other mechanisms.
So there really is a lifecycle that is fairly well understood. What we make is a software platform that helps move contracts through that entire lifecycle.
Over the last couple of years, we’ve made investments to start bringing AI into those experiences. You’ll hear us talk about “AI on the inside.” What that means is that, rather than bolting AI on as an afterthought, we have AI experiences that take advantage of the rich information already inside our system. So when you use Ask AI and say, “Go find me that contract—what does it say?”, it has the benefit of all that scaffolding, all those fixtures, all that structured context. So you get higher accuracy and better results.
The other thing I’d note is that we’re increasingly trying to give customers ways to unlock the value of what’s in their contracts. We have a product called Screens, and basically it makes it very easy to do contract analytics. And I realize “contract analytics” can sound a little opaque, so let’s unpack that.
There’s a change we’d like to make—one that we have deep conviction is going to benefit our customers. We think, this is going to be great. You’re going to love this. It’s going to delight you. But of course, we have lots and lots of customers, and they’ve negotiated all kinds of bells and whistles into their agreements.
So one of the things we literally did yesterday—Bennett, my head of legal operations, who is a wizard, a complete mad scientist—was say, “I think we can actually run a simulation on this and figure out who we’d need to talk to about it.”
And in almost no time, he used Agiloft to run that question across our customer base and identify which customers might have thoughts and feelings about this, so we could think about engaging them on the topic.
That was something that would have been incredibly hard a few years ago—now it’s Bennett doing a thing on a Wednesday in half an hour. You have those spark moments where you realize: oh, we can do this. We have the things we need to do this right now. And it’s so much more approachable than people realize.
I talk to friends and colleagues practicing in other places, and they’re like, “Man, I’ve got this EU Data Act issue and I’m not sure how I’m going to drive compliance.” And I’m thinking to myself: those things are still hard, because the world is complex.
But the nature of our practice is about to radically change, because we now have tools that basically put a superhero cape on our backs and let us do amazing things.
When I was an associate, that would have been me, a spreadsheet, and bankers’ boxes in some horrific, soul-crushing exercise. Now it’s like a magic wand. You get the wand, you say, “I want to do this,” and you can. And it’s wonderful.
So yes, we make CLM. We use it to solve very real, concrete problems that businesses have. And it gets more powerful and cooler every day. And we’re about to start lighting up agentic capabilities that are going to take all of that magic and give our customers the ability to do it at even greater scale and with greater precision and leverage.
Because as we start building these agentic frameworks, and as we gain the ability to run work in parallel, that’s one of the major unlocks. Right—so you’ve had the experience of spinning up a task or objective, whatever you want to call it, and then letting it grind away over there while you go do other things.
Bridget McCormack: Exactly. You go do other things over here.
Jen Leonard: Sometimes I think it’s just doing that to make me feel better, because the questions are so silly. I’m like, “Claude, are you just trying to make me feel like I’m involved in this process? You know the answer.”
Jason Barnwell: I have a similar hypothesis. It’s basically theater for us. It’s funny when it starts imitating Dan Ariely’s decoy effect. It’s like, “Here are three options. One of them is obvious. What would you like to do?” And I’m like, yes, please do the obvious.
Jen Leonard: It is “collaboration theater.” It knows.
Jason Barnwell: Can we coin that right now? “Collaboration theater.”
Bridget McCormack: One hundred percent. #collaborationtheater. How do we get that trending?
Jen Leonard: It’s like when my nine-year-old wants to help me cook something in the kitchen, and it’s not actually helpful to have her do the thing. So I’m like, “Here, you can fold the napkins.” She’s like, “I’m engaged.” That’s me and Claude Cowork.
Jason Barnwell: But it is absolutely true: you’re having the experience of seeing your capabilities amplified because you have all this stuff happening in parallel. And as we turn that on more fully, it’s going to be really interesting to see how people adapt.
Because what that really does is reposition the human. You really do become the architect. What you’re trying to do is create the conditions for these parallel efforts to deliver outcomes that align with the objective, operate within the constraints, and match the value function you define.
It’s a little like what we used to do when we managed associates. You’re trying to give them a mission. You’re trying to give them enough information about where you want it to land, the boundaries to stay within, and some useful elements about what you already know. It’s oddly similar to that.
That’s where I think it’s going to be interesting to see how the skill set evolves for us—especially for people whose excellence and expertise have mostly lived inside their heads. One of my hypotheses is that the people who are going to get the most value out of these emerging experiences are the people who can engage in the metacognitive process of cracking open the heuristics they’ve earned over time and effectively decompiling them into explicit instructions that can be handed off to one of these magic boxes.
And what’s interesting is that there’s going to be this amazing inversion. In the past, not doing that benefited you, because you had the secret. Nobody knew how to do the thing you knew how to do.
That made sense in a scarcity environment. But now that we’re in something more like a machine-intelligence-plenty environment, the ability to unlock that heuristic, put it into something else, and turn it into a crank that can be turned by something else to create value for somebody else—that’s a very different thing.
Bridget McCormack: We talk about that a lot. Not only do you now have this whole new set of cranks that can be working if you’re able to unlock that markdown file from your brain. It’s also a way to transmit knowledge more broadly across a new group of professionals that you still need to raise up at some point. Maybe we’ll never need another lawyer, but I actually think we will. They’re just going to do different things. And to grow them up, it’s actually an exciting model. I’m not sure anybody is focused on this part of it, because everybody is building for the most lucrative use cases right now. But I do think there’s an education model or training that we could be building that could be really exciting, with the same unlock you’re talking about.
Jason Barnwell: One thing I’ve been really insistent about is that we need to turn our known work—the work where we know the type of it and we know what to do with it—into declared protocols.
To your point, as we start to have a catalog of declared protocols, anybody who comes into the team can have that Matrix experience of: you can shove the knowledge of how to do this thing into my practice toolkit much faster.
And here’s where it gets super weird. One of the things I’ve been talking to folks about is using Claude and other tools to start creating simulations. They’re using them to create a virtual set of circumstances and run water through the pipes of scenarios. When I started practice, I was an emerging companies and venture capital attorney, basically doing transactions, securities, and general lawyer handyman work. The rate of learning for me was fundamentally constrained by whatever client work came in that was suitable for me to do. When the surf was up, great—I was getting lots of reps and lots of experience. When the surf was down, it was very tough to learn, because there just wasn’t enough work to sharpen my lawyer knife on.
One of the things that could be different now—and it might be a very different way of training people into our practice and our profession—is that we start running them through simulations of deals, disputes, regulatory inquiries, whatever it might be. These tools can generate synthetic scenarios trained on the kinds of things that happen in the real world. Then they can give people feedback in near real time: you chose to do this, here are other options you could have considered, here are the trade-offs. And that gets even more interesting once the protocols for your specific context are documented. That’s interesting as a general-purpose idea.
The conventional way we’ve gone about creating scale in our practice is by creating playbooks, which are often rule-focused declarations of heuristics. If a commercial request comes in for a customer paper and asks for X, do Y or Z. And that works really, really well. I want to be very clear: I’m not denigrating that. I love playbooks. They work great. They are magic.
The challenge is that you only have coverage on the things you’ve already seen and had time to reduce to practice and document. That’s what’s in your playbook. As the world gets weirder and more interesting, the set of variations coming in is probably going to expand in both breadth and volume. So even if we have fairly mature playbooks, we’re going to find all these gaps and holes in our coverage.
How do we think about that? We actually already have a way to do it—we just don’t usually keep it front of mind. Every organization has a strategy, which is basically its theory of how it creates value in the world and justifies its right to exist and ask for other people’s help. Then, at some level, there’s almost a constitution you could declare: here’s what we do, here’s why we exist, these are the things that matter to us, these are our values, this is what’s important.
That’s a layer of knowledge that is often very tacit and not expressed very often, but it’s there.
A click below that are specific objectives that flow from it, often focused on things that matter historically and prospectively. We might call those policies. And protocols often dock nicely under a policy umbrella.
This is literally an experiment we’re starting to run internally, so this is the wettest of paint. But one of our theories is that if you start defining your information space with those kinds of characteristics, you can do some really interesting things. You can drive better alignment across the system. You can start seeing where things don’t connect well or don’t cohere.
And if something comes in and hits one of your void spaces at the protocol level, maybe you can at least have a starting theory of how to think about it based on the layers stacked above. That makes it easier for humans to contextualize what’s happening and then sharpen it: no, here’s what we would want to do with this specific case. As we build out the catalog of protocols, the material to the left and right of a gap may also help buttress the inference of what should happen there.
So what I’m really getting at is this: if you architect your information system with a forward-thinking structure and a really good taxonomy and ontology that help with pattern matching, it may be the case that the machines become very useful in identifying the gaps you have and suggesting a starting theory for what belongs there. Just as importantly, it gives the human architect a better way to traverse an expanding information space that is going to be increasingly hard to hold in our heads. If we can start thinking about how to architect that so a human has a management and control structure that can scale, then I think very good things will happen.
Bridget McCormack: I actually think it’s a super exciting and promising future for bringing up the next generation of the profession, which is maybe the question we get more than any other. People see no other way, as if the way we’ve done it for all these years was perfect: by osmosis, by looking through documents, we really learned how to be strategists. It went well.
There are obviously other professions where they train people to do really tricky things without just letting them practice by doing it. You don’t send astronauts on practice missions and say, “Good luck out there.” There are other ways of training professionals to do things.
Jen Leonard: Not only that, but I had to split-screen this in my own early career. I worked in a law firm in the traditional model, and I worked in a municipal law department in a very untraditional model. My husband called it “emergency room lawyering.” You just took whatever came through the door and figured it out. At the same level of experience, I learned leaps and bounds more in the second situation.
We had the ability to experiment because we had no other option, and we had psychological safety because nobody was going to get fired for making a mistake. That’s very different from a law firm, where even the tiniest, most rote task was somehow both the most boring thing I could be doing and the most stressful thing I could be doing, because the weight of the partner’s expectations and the client relationship was on my shoulders even though I had no idea what I was doing.
Jason Barnwell: Giving people the ability to experiment in a high-variety environment is one of the fastest ways to learn. And in a high-dynamism space—which is what we’re in now—that’s probably going to be more valuable than the old conventional approaches, which were really good at refining from 98 to 99 percent good, but less good at creating broader coverage, breadth, and spread across someone’s practice. I think you just gave a perfect capsule example of how to do it.
Jen Leonard: I feel good about myself. Thank you. You’re like Claude Cowork. I feel like we’ve collaborated and I said something of value.
We’re so delighted to talk to you, Jason. I think you’re one of the most interesting minds in legal. I feel smarter and dumber every time I talk to you. But thank you so much for spending your time with us. We really appreciate it.
Jason Barnwell: You both are an absolute delight. I could not be happier that you’re advancing the conversation and helping us all get smarter. So thank you, thank you, thank you.
Jen Leonard: Thank you. And thanks to everybody out there who tuned in to listen to this episode of AI and the Future of Law. We look forward to seeing you on the next edition. Take care.