Summary
In their first guest episode of AI and the Future of Law, hosts Jen Leonard and Bridget McCormack sit down with Jen Reeves and Chris Kercher from Quinn Emanuel, one of the world’s leading litigation firms, to explore how AI is transforming the way legal work gets done, not in theory, but in actual day-to-day practice.
From launching a grassroots “Skunk Works” initiative to integrating tools like Claude and ChatGPT into real litigation workflows, this conversation unpacks how a major firm has created an agile, AI-forward culture. Reeves and Kercher share how they’ve built internal momentum, balanced risk and experimentation, and empowered lawyers at every level to embrace AI as a thinking partner, not a shortcut.
This episode is packed with practical strategies and cultural insights for firms of all sizes, especially those wondering where to start or how to scale legal AI adoption.
Key Takeaways
- AI Culture Starts with Curiosity: Quinn Emanuel’s early AI success grew from a bottom-up “Skunk Works” model that encouraged experimentation without pressure or billing requirements.
- Leadership Matters: Partner-level buy-in, hands-on use, and intentional tool choices (like enterprise Claude) helped overcome early skepticism and accelerate firm-wide adoption.
- Context Beats Prompts: Real gains came from giving AI tools relevant case materials and letting lawyers guide usage, proving that deep context consistently delivers better results.
- Small Firms Can Compete: Solo and small firm lawyers can use public AI tools effectively by focusing on security, context, and strategic experimentation.
- AI as a Thinking Partner: Whether planning litigation, creating client memos, or managing projects, AI is helping lawyers work smarter, faster, and in many cases, with more enjoyment.
Final Thoughts
This episode offers a compelling inside look at what legal AI adoption really looks like when it’s done right. Rather than treating AI as a top-down compliance issue, Quinn Emanuel has built a learning culture grounded in experimentation, autonomy, and creativity.
For legal professionals, tech leaders, and firm decision-makers, this conversation is a blueprint for where the industry is headed and a reminder that the firms moving fastest are often the ones willing to learn out loud.
Transcript
Introduction
Jen Leonard: Bridget, it’s so exciting to see you for our very first episode featuring guests. We had the great opportunity to talk with Jen Reeves and Chris Kercher from Quinn Emanuel, and we explored the many ways they’re using AI at the firm. Quinn is very AI-forward, and we saw how strong leadership – combined with putting tools in everyone’s hands and fostering an experimental, optimistic mindset – has led to a lot of interesting uses for AI.
Bridget McCormack: Quinn really seems to have a culture that allows AI experimentation to flourish. As a result, it seems we’re seeing a lot of happy, excited lawyers talking about the ways this technology is impacting their clients and their teams.
Jen Leonard: Hello, everyone, and welcome back to AI and the Future of Law, the podcast where Bridget and I connect the dots between what’s happening in the broader landscape of artificial intelligence and what’s happening in our own industry – the legal profession.
Today, we’re thrilled to be joined by two leaders in legal innovation: Chris Kercher and Jen Reeves of Quinn Emanuel, a powerhouse litigation firm. Chris is a partner at Quinn, and Jen is Lead Innovation Counsel. In that role, she’s leading generative AI initiatives at the firm. Welcome, Jen and Chris – thank you so much for being our first official guests on AI and the Future of Law.
Chris Kercher: Thrilled to be here, Jen and Bridget. Thanks for having us. We’re honored to be your first guests.
Jen Leonard: So, one segment we like to lead with is our AI Aha! segment. Generative AI is a general-purpose technology without an instruction manual, so we learn about it by trying different things. In this segment, we each share something we’ve been using generative AI for in our personal or professional lives that we find particularly interesting.
And our audience has heard Bridget and me use it in all sorts of weird and fascinating ways. We thought it would be fun for them to hear from each of you how you’ve been using AI in your lives. So, Jen, maybe you’ll kick us off – how have you been using AI in your life?
AI Aha! Moments: From Logistics to Creativity in Real Life with AI
Jen Reeves: Happy to! So, I was in the process of moving from New York to Houston a couple of weeks ago, and my fiancé and I decided it would be fun to drive. I had used an AI travel tool (Manus) initially to plan which cities we should stop in – we didn’t want to drive too many hours each day and needed to decide where to stay. It gave me a great itinerary as a starting point. But what ended up happening is I made real-time updates using ChatGPT. We’d be on the road, get too tired, and I could ask, “What’s a nice place to stay that won’t take us too far off track?” I gave it parameters like our budget and the fact that we wanted somewhere casual but nice. It was fantastic at finding places along our route that weren’t too far off the freeway but still provided a nice, restful stop. We did that for the whole trip down, and it was awesome.
Especially with restaurants – once ChatGPT’s new memory feature was in place and it knew what I liked, it became really easy. I could just ask, “What should we eat tonight?” or “Where should we stay?” and it would remember our preferences. It was fantastic.
Jen Leonard: Very cool. I’m glad you’re all settled in Houston, and glad that AI was helpful in the journey. Chris, how have you been using AI in your life recently?
Chris Kercher: [laughs] So, funnily enough – Jen and I did not coordinate this – my example is also about moving, though slightly different. We’re moving from Westchester, New York to Connecticut (just a few minutes away), and there’s a lot that goes into moving a family of six. My wife and I, who are both busy professionals, created a ChatGPT project for the move. We loaded in everything – the house inspection report, our vendor list, every email we got from anyone involved in the process.
It made an unbelievable punch list of things we never would’ve thought of – all the tasks you need to handle when leaving one house and moving into another. It reminded us about things like closing procedures for the new house, switching over utilities and state-specific requirements we have to pay attention to. It’s been really cool for that – basically acting like a super organized project manager for our move.
Jen Leonard: Great AI Aha!s for the big life moments! Thank you both for inspiring us with those ideas. Now, of course, we’re here to talk all about AI in law. I’m going to turn it over to Bridget to get our conversation going.
Main Segment: How Quinn Emanuel is Building a Culture of AI Adoption
Bridget McCormack: It’s really exciting to talk to you two. It feels like you were some of the first lawyers – at least in big global firms like Quinn – who were all in, sleeves rolled up, figuring out how this technology might make a difference for the people you serve. We heard that from you very early on, and it’s been really fun to stay in touch with you.
For our listeners, starting at the beginning: Chris, tell us a little about what Quinn’s approach to using AI has been. What are the goals? How have you already seen an impact? And, to the extent you’re comfortable sharing, what does your roadmap look like?
Chris Kercher: Absolutely, Bridget – thank you. And thanks again for having us. You know, it’s been a very unconventional journey. Like much of what we do at Quinn, there wasn’t a lot of structure or bureaucracy to it. Sometimes that can be difficult because it’s not always clear what to do, but it also means you can just do things.
I started using AI tools as soon as they became available – it was clear to me this would be a useful skill to learn. I played around with them quite a bit. Then when Claude 3 came out about a year ago, I had my big “oh my god” moment: I realized this model, Claude 3 Opus, can really write. I mean, it writes well and it can edit well. So I reached out to our head of IT, John Stambelos, and basically said, “I don’t want to have to keep anonymizing client names or worrying about confidentiality – that really restricts what I can upload. Can we get an enterprise version, even on a pilot basis?”
He agreed – he was eager to find AI tools we could use. I think he was thrilled to hear I’d already found value in one! So I started to think about what it could do for us. I had a couple of notions (we’re still working through them). The roadmap was a bit fuzzy, but one idea was simply to put the tool in people’s hands. Let’s get our smart, talented, creative people using it like I am – experimenting, sharing information about what works, sharing pitfalls and techniques – and build some enthusiasm.
I observed that in other places, there was a much more conservative approach, with a lot of concern about client confidentiality and a ton of concern about hallucinations. And I said, hold on – let’s solve for those. Let’s make sure everything that leaves the firm with our name on it has been cite-checked and rigorously reviewed (as it’s supposed to be anyway). And let’s get an enterprise model so we’re not worrying about client data leakage. Then let’s get the tool to our people and encourage them: don’t think of it as cheating. Think of it as a new tool – why wouldn’t you use it?
And that approach has been incredibly successful. I think we’ve now got hundreds of people at the firm using AI. We started from the bottom up – we began with young associates, staff members, people I thought would be early tech adopters who’d use it responsibly and share their learnings.
We created something called the “QE Skunkworks” program – a nod to Lockheed Martin’s famous R&D Skunk Works. We had our own Slack group, our own Zoom meetings. There was no credit or billable time given; it was just, “If you want to learn about this, mess around with it, and share your findings, join in.” There was tons of enthusiasm, and it was really, really rewarding.
So that’s been a huge part of it: establishing a culture where people are thinking about AI, using AI, sharing with each other how they use it – and building that culture. It’s been great.
The second aspect I considered: we need to figure out what tools are out there. There’s a ton of venture capital chasing this market. It seems like all the brilliant machine-learning engineers from the FAANG companies got funding to build AI products, and the VCs said, “Hey, try law – lawyers need AI.” And so a lot of products appeared that don’t really serve our needs. There’s a temptation for firms to just grab something off the shelf. I wanted to make sure we were smart about what we buy, and that we had enough bandwidth to look at everything carefully before adopting it.
The third piece of our plan is really thinking through what AI can do for our own data. Quinn Emanuel has a lot of information about itself and what we do, but historically it’s hard to access and hard to organize or classify. So we’ve started exploring whether we can use AI tools to better understand our own business and practices – to unearth institutional knowledge that’s been hard to tap into.
So that was our roadmap. What we needed, most of all, was someone to run this day-to-day – because I’m very much a practicing litigator. In fact, I was fortunate to be named “Litigator of the Week” in March for a big trial-court victory (in front of another Judge McCormack, coincidentally, down in Delaware). I was able to use AI in that case, which was really exciting – but of course I can’t do all these things all the time on top of my normal practice.
So last year, when I talked to John about getting Claude, I also asked if we could hire someone to run this operation full-time – someone to handle training, culture-building, evaluating new tools, etc. And that’s how we landed Jen Reeves. I’ll never know how we convinced her to join us, but hiring Jen has been the single most valuable thing we’ve done.
Jen has been a huge value-add. Having someone who is devoting every day to building enthusiasm, teaching people how to use these tools, getting people on board – and who genuinely loves doing it (I think she gets the same kick out of it that I do) – is such a treat. That’s become a primary pillar of our approach, as it turns out.
Bridget McCormack: That’s amazing. I don’t know if you realize it, but you’ve pretty much followed the roadmap that Professor Ethan Mollick has laid out in his research on organizational adoption of innovation – the whole “crowd, lab, leader” model. You have the crowd (everybody “in”), which I think is exactly the right approach because only the end-users know the best ways AI can impact their workflow. You have the Skunkworks lab team that’s really driving experimentation. And you have firm leadership not only supporting it but investing in it. I don’t know if you knew you were doing all that, but it’s remarkable!
Jen Reeves: We’re avid Ethan Mollick readers as well. When you kind of eat, sleep, and breathe The Innovator’s Handbook and those kinds of books, that philosophy seeps into your brain. So when you think about how to get a new technology adopted, you inevitably ask yourself, “What was the Aha! moment that clicked for me? What turned it on for me?”
And honestly, Chris deserves all the credit here for setting this up. I got to come in when the Skunkworks group was already in place – which was amazing. It was a cross-sectional group of the firm, not limited to busy lawyers working on high-profile cases or to uninterested people in leadership positions who were forced to use the tools. We got a really good mix of folks involved, and then I was able to build on that foundation.
I mean, I know, Chris, that you’re a bit of an “Ethan-holic.” [laughs]
Chris Kercher: [laughing] Yeah.
I’d just say it’s funny how learning works, right? It’s not like I sat down with an Ethan Mollick article and decided, “This is our plan.” But I’m not surprised we ended up following his advice, because I’ve been reading his work since the beginning. Sometimes I’ll go back to something Ethan wrote and realize, “Oh, that idea probably influenced me,” even if I didn’t know it at the time. It just sort of seeps into your brain and later comes out in practice.
And another person I follow is Andrej Karpathy – he used to be at Tesla and OpenAI and he posts a lot online. He’ll say things that I sometimes wonder, “Is that really obvious, or is that genius?” He has a lot of those insights. So between him and Ethan, I think their ideas have been floating around in my brain and then popping out months later in what we do.
In any case, the approach we’ve taken makes a ton of sense to me, and I think it’s working well – especially compared to a top-down approach where management mandates some AI tool without really understanding what the goal is or how the work is done.
I’d encourage other organizations: read Ethan Mollick, take his lessons to heart. We’ve had a lot of success with that kind of roadmap.
Jen Leonard: It’s so funny you mention that, Bridget. As Chris was talking, I literally wrote down “leader, lab, crowd” – you’re definitely following that roadmap.
The other thing you said, which struck me, was about solving problems rather than just noting them. Bridget and I have talked about this on the podcast: we listen to the Dwarkesh podcast, which tends to be pretty technical, and on one episode two technologists from Anthropic were discussing the obstacles they hit. Every time they hit a roadblock, they kept saying, “So we solved for it,” and “We’re going to solve for this.”
And Chris, you basically said the same thing. You described going to John and saying, “We need to solve for this.”
When we visit firms or Bridget works with her team, we really notice this mindset difference. Some lawyers see a challenge – like, “There are hallucinations” – and their response is, “Well, that means we can’t use AI.” Others say, “There are hallucinations, so we need to solve for that so we can use AI safely.”
It seems like a simple shift, but it’s everything in terms of success. I thought it was interesting you used those exact words: solve for it.
Chris Kercher: Oh, yeah. And it’s such an unlock once you get past that fear. If you realize, “Wait a minute, we have to solve for that anyway,” then it all falls into place. In our field, someone should already be carefully cite-checking everything – not just checking the Bluebook formatting for a citation, but the actual substance. That’s what we’re paid to do; that’s what we vouch for in our work product.
So, junior attorneys – suddenly they become masters of reality in this process. They’re the ones making sure everything we say aligns with something real: case law, facts, what have you. And that’s huge. Culturally, having the mindset of “we’re not freaking out about potential problems; we’re addressing them sensibly, and then moving forward enthusiastically” is so important.
I can even sometimes tell when someone on the team has used Claude to help with a draft. That’s another thing you learn – identifying the little giveaways of AI use. But rather than viewing that negatively, we embrace it. It’s not about “Oh no, AI wrote this,” it’s about building a new culture.
For example, when I see a piece of work that’s well done and I suspect someone used Claude, I applaud them. I say, “This is great.” We cranked out a summary for a client on a Friday afternoon before a holiday weekend – something most firms wouldn’t have gotten to until the next week. But we did it; it was beautiful, informative, and correct. And it was both easier and higher quality.
So I find that the teams I’m on – and the ones I lead – are adopting AI really fast. We’re able to move faster and be more creative. Honestly, this is the most exciting time I’ve ever had practicing law. It’s great.
Jen Leonard: That’s fantastic. Now, I have a question for you, Jen, about the learning process at your firm. Something Bridget and I often see when we talk with various groups is that they really want to create a clear, step-by-step playbook – like a map everyone should follow – and they think that’s the key to success for the organization.
Do you think having one rigid approach for everyone is the right way to help an organization learn how to use AI, or is there a better way to teach people at an individual level?
Jen Reeves: Great question – and it’s something Chris and I have thought a lot about. I’m pretty proud of what we’ve accomplished so far, and I think the results speak for themselves in our engagement numbers. It’s one thing to get people signed up or logged in once; it’s another to see them continue to use the tools regularly.
To your point, I don’t think it’s about handing everyone a prescribed workflow. It’s not very effective to have people far removed from day-to-day practice dictate, “Here’s exactly how you shall use AI.” Instead, we let the practicing lawyers guide how AI gets implemented in their workflows. I always tell our attorneys: you are the subject-matter expert in your work. I’m here to show you how to use this tool and help brainstorm ideas, but you have to tell me what you’re working on. Then I can say, “Okay, here’s how Chris or I might apply AI in that situation.”
So, we often start by giving an overview session – showing them literally how to use the tool (how to upload a document, how to prompt it, etc.). But the real magic happens when they come back to us while working on an actual project. We tell them: it can be anything – AI can fit in anywhere. You don’t have to wait until you have the “perfect” use case that matches some checklist.
We’ve found that AI can help almost everywhere, and letting lawyers bring real, pressing problems to us and saying “How might AI help with this?” is incredibly effective.
Chris Kercher: I completely agree. It’s interesting – I’m not a technologist by training, and it can be really hard to teach this in the abstract. I can demo impressive outputs all day, but unless the question and the output mean something to the person watching, it’s just, “Oh neat, it can write a paragraph… but I don’t know if that answer is correct or useful.”
That’s why Jen’s approach – having people bring us their actual problems – is so important. When we work through a real issue with them, they can see the value directly. They might react with, “Oh, that suggestion is great!” or “Huh, that’s an interesting angle,” or even “Well, I already knew all that, so this isn’t adding much.” But in each case, they’re actively evaluating it on something concrete.
Bridget McCormack: And from what you both described, it sounds like Quinn may have had a cultural edge here. The firm’s background culture probably made it easier for you all to build this experiment and really let it take off – bringing real value to your clients and, it sounds like, to your lawyers’ day-to-day work. Listening to you two, it also seems like it’s more fun to practice law when you have a super-brilliant “partner” (AI) brainstorming with you on hard projects.
That background culture is so interesting. Jen and I give a lot of presentations to legal teams (in-house teams, law firms of all sizes), and every organization has its own starting DNA, its own culture. Quinn’s might have been especially well-positioned to capitalize on AI… or, Chris, maybe you just willed it into being! Likely a combination of both.
I’m curious: when you’re doing these internal consultations with colleagues, do you essentially say, “Bring me a real problem and let’s work on it together”? (I think that’s the best approach – sitting down with someone and brainstorming side by side with this extra-smart helper on your screen – what Andrej Karpathy calls a “secret human spirit” in the computer.) How do the lawyers react? Has every lawyer been blown away? Because across the profession we see a wide spectrum of reactions. Do you see that range at the firm as well, or has it been more uniform?
Chris Kercher: Yeah, that’s a great question – and it’s one I haven’t really talked about publicly before, though I’ve thought about it a lot. If I think back to a year ago, the legal community’s view of AI was very different from what it is now. Early on, I honestly didn’t know what would be accepted or how people would react.
I knew AI was important and worth focusing on, and I hoped I could pursue it at Quinn in a way that would thrive. I also knew we had the kind of culture where something like this could flourish, as long as I didn’t hamstring it with too many rules or committees. So my approach was, let’s just do it – get it into people’s hands and see what happens – while being mindful of the risks.
I’ll admit I had some sleepless nights initially, worrying that someone might, say, submit a court filing with an AI-generated hallucination in it. I imagined a partner calling me: “Why the hell are we using AI? I had no idea this was going on!” Because, to be fair, we didn’t have a top-down mandate or formal policy at first – we were just doing it organically.
That’s why Jen and I have been doing, and still do, a lot of one-on-one onboarding and training. We drill into our users: verify everything. An AI might present a case quote that looks perfectly legit – that’s by design, the model is predicting what a plausible quote would be – but it could be completely made up. And that’s insidious. I’ve been burned a few times in internal drafts where I thought I had a great quote or fact, and it turned out the model just fabricated it. You really have to double-check.
So early on, I was pretty nervous and very hands-on to ensure we caught those things. Once we got over that hump and put in place processes to mitigate those risks (and frankly, as people became more savvy about the limitations), I felt more comfortable. But I intentionally did not broadcast firm-wide, “Hey everyone, start using AI now!” in the beginning. Because I knew some people – not just at Quinn, but anywhere – have strong feelings against this stuff. I wanted to meet people where they were. I didn’t want to force it on anyone or spark a backlash. Our approach was: if someone is interested and wants to use AI, Jen and I are here to help them do it properly. If someone isn’t comfortable with it, that’s fine – we’re not shoving it down anyone’s throat.
We did quietly circulate a sort of update to my fellow partners with my thoughts (probably heavily influenced by Mollick and Karpathy, and even by things you two have said). It was like a “state of play” email, just to give leadership an idea of what we were doing and how we were handling it. We also set up an internal mailing list to share tips and developments. So there was an education component even for those not directly involved day-to-day.
But early on, yes, it was uncertain territory and we proceeded cautiously: “Let’s try this, but take a lot of precautions so that if something goes wrong, we can catch it and fix it going forward.”
Now, fast forward to today – we’ve gotten a ton of traction. Lots of people are on board, and the technology itself has advanced, so the outputs are even more impressive and reliable than a year ago. We recently started a ChatGPT pilot to complement our Claude pilot, so now we have many folks trying out both models at all levels of the firm. By and large, the response has been very enthusiastic.
Sure, a few people have voiced concerns along the way – that’s natural – but at this point I’m not seeing anyone actively evangelizing against AI use. The more common stance I encounter now is a sort of FOMO with a dash of guilt: lawyers saying, “I know I need to try this… I just haven’t had time.”
So one thing Jen and I are focusing on is how to make it easy for even the busy skeptics to dip their toes in – give them a quick, approachable win that doesn’t take much time.
And I think you both have mentioned this before: I personally use AI for almost anything now. The voice dictation feature alone, for example, is fantastic. I can just brain-dump whatever’s on my mind into it. I’ll say, “Here’s my to-do list” or “Help me plan this complex meeting agenda,” and it will structure it for me. That ability to just unload your thoughts and have the AI organize or respond is, in a word, revolutionary.
It’s great for us to share those little personal examples with people – those “Aha!” moments. And I love that you and Bridget do this too on your podcast and in talks, because that’s exactly what we aim to do internally. We want to sit down with someone and not only show them, say, how to summarize a deposition transcript, but more importantly get them to that moment of realization: “Oh, I don’t need to wait on someone else – I can just do this myself, with AI’s help.”
Jen Reeves: It’s been amazing to watch that change. When I first started pushing AI internally, I was almost marketing it: we had only a few licenses, and I was trying to drum up interest – “Hey guys, we have this tool, come check it out!” I don’t have to advertise anymore. Now I’m getting at least five requests a day (often more) from people who want access or have an idea, especially whenever we send a newsletter or one of us mentions something in a meeting.
People approach us like, “I saw so-and-so do this really cool thing with the AI – can I get in on that?” I literally keep a folder in my email of feedback and requests. It’s full of quotes like, “This is great, I heard good things, I need to get on this ASAP.”
So yes, I do think Quinn’s culture helped – as Chris said, we’re not strangling the innovation with a million rules about anonymization or redaction up front. If we had started with, “You must scrub every client detail and basically lobotomize the tool before using it,” I think a lot of folks would’ve thrown up their hands and said, “Eh, too much hassle. I’ll just do it the old way.” Instead, we addressed the risks in other ways (like the enterprise solution and training on verification) so that people can genuinely experiment and integrate AI without a huge overhead.
Our stance is basically: “This is a safe environment to experiment. Some ideas won’t work – that’s okay! You’ll learn even from the misfires.” We encourage trying stuff out (responsibly). It’s an experiment culture, with guardrails.
Jen Leonard: I love that. So we get a lot of inquiries from much smaller firms than Quinn. I’m talking about solo practitioners or firms with just a handful of lawyers. They don’t have the resources or in-house teams – or a dedicated “Jen Reeves” guiding them – but they are eager to leverage these tools to run their practices better.
They’ve actually asked us to pose this question to you: How should a small firm or solo lawyer approach using publicly available models like Claude or ChatGPT to get started?
Chris, maybe I’ll start with you. What advice would you give to, say, an experienced partner who’s now on their own or in a small shop and wants to start using AI in practice (not just for the business/admin side, but in legal work)?
Chris Kercher: Wow – great question. Honestly, I think it’s a fantastic time to be a solo or in a small firm, precisely because AI lets you leverage yourself so much more. You can amplify your cognitive abilities, your attention, your mental stamina – everything.
I would suggest going through a process similar to what we did at Quinn, but scaled to your situation. First, address the non-negotiables like client data and confidentiality. That’s paramount. If you’re dealing with sensitive information, you either want an enterprise solution or to use a model that guarantees it’s not training on your data (so you have a reasonable expectation of privacy). You need to ensure you’re not inadvertently waiving privilege or exposing client secrets.
Maybe that means investing in a paid plan or enterprise license of a model that offers data privacy, or using an open-source/offline model for highly sensitive stuff. Solve that piece first so you can use AI responsibly without fear of an ethical breach.
Second, recognize the potential pitfall that as a solo you don’t have a second pair of eyes on everything. That is a bit scary – we all rely on colleagues to sanity-check important work. But AI can partly fill that gap. You might not have an associate or junior to review your draft, but you can sort of “partner” with an AI to double-check things. Still, it’s on you as the attorney to be extra careful.
If you can swing it, maybe hire a part-time young lawyer or even a law student intern to be a second human checker for critical outputs – just like any quality control. But if not, then double down on your own verification steps.
After those precautions, I’d say: go for it, full speed. The beauty of being in a small practice is you’re nimble. There are a ton of resources online – many are free – but the truth is, a lot of your learning will be self-taught through experimentation.
And the fun part is, AI itself can teach you. The real unlock for me was treating the AI as a partner and asking it directly for help. Literally, I’d open Claude or GPT and say, “Hey Claude, here’s what I’m trying to do. How can you help me?”
You can have a conversation with the model about how it might assist. For example, outline your workflow or a task you often do and ask, “What parts of this could you handle?” You’ll often get a surprisingly thoughtful answer.
Ask it to help plan something – “Give me a game plan or punch list for X.” It’s like what I did for my move, but apply it to your law practice. It will help organize you, keep track of tasks, maybe even generate drafts for routine documents, while you supervise and edit.
It can be as simple as, “I need to draft a basic contract for Y. What should I consider?” Or, “Here’s a rough outline of an argument – can you help flesh it out?” Treat the model like a very eager junior associate who never gets tired. Of course, verify its output, but use it to get momentum.
Another piece of advice: context is key. This is something Karpathy (and others) emphasize – and I think Jen will back me up here – getting the right context into the model dramatically improves its output. I joke that nowadays “English is the hottest new programming language,” because prompting these models is kind of like programming in plain language. And as lawyers, we’re excellent at English communication, right? We craft arguments, tell stories, explain complex things clearly. Those skills translate directly into getting good results from AI.
So, the more relevant context you feed the model – the facts of your case, key documents, etc. – the better responses you’ll get. One practical tip: set up a “project” chat with the AI for each matter. For instance, in our practice we’ll have a Claude chat dedicated to a specific case. We’ll load it up with the core context – maybe the complaint, the answer, important deposition excerpts, whatever – so Claude has that all in its “brain” when we ask it for something on the case.
Even if you’re solo, you can do this. Maybe for a contract review, paste in the contract and any term sheet or related emails at the start of the chat. Or for litigation, give it the basic facts or a summary of the case. It can even summarize long docs for itself to stay within limits (e.g., “Claude, summarize this 30-page contract briefly, now use that summary plus this other info when answering my questions”).
Spending a bit of time to engineer the context up front pays off hugely. I’ve found that including more relevant info can almost magically elevate the quality of the output. Sometimes it’s subtle; sometimes it’s night-and-day. But the model will perform much better if it “knows” the specifics of your problem.
So if I were a solo starting out, I’d take one of my active matters, open an AI chat, and dump in the key context: “Here are the facts/background of this matter” or “Here’s a memo of what I’m working on.” Then start asking questions or giving it tasks related to that matter. You’ll likely be amazed at how much more insightful the AI’s help is when it has the details.
Even if you don’t know how to perfectly prompt at first, don’t worry about crafting some uber-prompt. Just communicate naturally. I tell people: don’t get hung up on “prompt engineering.” Just explain what you need in plain language, provide the context, and iterate. The AI is quite good at understanding intent if you give it information.
Jen Reeves: I’ll add to that: you can accomplish so much with just the general models that are out there (Claude, ChatGPT, etc.). That’s basically all we’re using at Quinn right now – we haven’t even needed fancy specialized legal AI tools yet, because the frontier models are that capable. So a small firm shouldn’t feel, “Oh, we lack the expensive, legal-specific software.” You don’t need it to start. Focus on leveraging the general AI with your own data and context, as Chris described – that’s the critical part.
What I tell people who feel overwhelmed about starting is: just throw your key documents into a Claude or GPT session and start interacting. For example, if you have a case file, maybe copy-paste in the complaint, a key contract, whatever defines the problem, and then chat with the AI about it. Now you and the AI are “operating in the same universe,” so to speak – it has the relevant facts and context in view. If you say, “These are the parties, these are the basic issues… now let’s brainstorm,” the AI will understand the scenario and provide much more tailored help.
And don’t worry about perfect prompts. I try not even to use the term “prompt engineering” with newcomers, except to tell them not to stress about it. It’s really more like having a conversation. Focus on clearly communicating what you’re looking for, and provide any information that might be relevant. The AI will do a lot of the heavy lifting in interpreting your request.
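For readers who want to make this “context first, then questions” advice concrete, here is a minimal sketch of the same pattern done programmatically. It assumes the Anthropic Python SDK; the file names, model alias, and sample question are placeholders, and in practice everything Chris and Jen describe happens in Claude’s own chat or project interface, with no code required.

```python
# Minimal sketch of the "load the context first, then ask" pattern.
# Assumptions: the Anthropic Python SDK is installed and ANTHROPIC_API_KEY
# is set; the file names, model alias, and question are placeholders.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()

# Core materials for one matter: the more relevant context the model sees,
# the more tailored its answers.
context = "\n\n".join(
    Path(name).read_text()
    for name in ["complaint.txt", "answer.txt", "key_depo_excerpts.txt"]
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder alias; use a model you have
    max_tokens=1024,
    system=(
        "You are assisting a litigator. Ground every answer in the case "
        "materials provided, and flag anything you cannot verify."
    ),
    messages=[{
        "role": "user",
        "content": (
            f"Case materials:\n\n{context}\n\n"
            "Question: What are the weakest points in the plaintiff's "
            "breach-of-contract theory?"
        ),
    }],
)

print(response.content[0].text)  # cite-check before anything leaves the firm
```

The design point is the one both speakers make: the documents go in once, up front, and every later question is answered against that shared context rather than against a bare prompt.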
Bridget McCormack: This is a very different way for humans to work with software – it’s conversational. It might feel foreign for a few minutes, but it’s actually quite intuitive because it’s just English. There’s no special coding or syntax you have to learn to get started. I’ve onboarded people – even outside law – and at first they don’t quite “get” that they can just ask or tell the AI things like they would a person. But once they do, it’s like a superpower.
I’ll give a quick anecdote: I recently set up my brother with ChatGPT. He’s a screenwriter – totally different field. He was working on multiple projects and likes bouncing ideas off a partner. I basically said, “Oh, do I have a writing partner for you!” [laughs] We got him his own subscription and I told him to treat it like a collaborator.
At first, he kept asking me, “What if I want it to do X? How do I…?” And I said, “Ask your new AI friend! Just go ahead and ask GPT the question.” He had this moment of, “Oh, duh, I can just ask it, not ask you to ask it.” And once he started doing that, he was blown away. At one point he joked that ChatGPT is a better writing partner than I would be (it probably is, to be fair!). I told him, “Exactly – if you asked me these questions, I’d just turn around and ask GPT myself. So you might as well go straight to the source.”
The same applies in law. Once lawyers get used to this dynamic, it’s like having a tireless, always-available junior partner or assistant. It’s a huge competitive advantage and frankly a relief in many situations. You mentioned Andrej Karpathy calling it a “secret human spirit” in the computer – I often say it’s like an extension of yourself, as Chris said earlier. It’s always available, always polite (sometimes more patient than a human colleague might be!), and never needs sleep or a lunch break.
When you’re doing high-level, intensive work, having an ever-present thought partner can make that work much more enjoyable and less stressful. It doesn’t replace your own judgment or creativity, but it amplifies them.
Chris Kercher: And to your point, Bridget, this isn’t something completely alien to us. It’s not like coding. I took CS100 at Cornell – the only science course I took – senior year, pass/fail. I passed by the skin of my teeth and remembered nothing. I look at code and I don’t understand it.
One of the interesting things I’ve noticed – just as an aside – is that coders, the people who program software, seem to be ahead of everyone else in how they’re using these tools. There’s Cursor and Claude Code, and I see these things referenced online. I have an intuition that those techniques may not be just for coders.
I didn’t quite know what to do with that, so we actually brought in my 15-year-old daughter, who’s our intern this summer – an AI intern. One of the first things I had her do was: “You’re taking coding in school. Help me understand what Claude Code and Cursor are doing, and how we might use them for the work we do.” Because there’s a lot of similarity, in terms of the use of context, text, and grounding, and the creation of new language from existing language. So that’s been very interesting.
But, you know, another thing that’s been really valuable is using it as a kind of bookmark for the brain. I have these half-baked ideas – probably, if you asked my friends, more half-baked ideas than anyone – and some of them eventually get fully baked.
Jen Reeves is a great example of one that got fully baked: I had this half-baked idea that we needed someone to run all of this, and what would that look like? Eventually it became a job description I could circulate, and we could hire.
But now, with any idea, I can just dictate as I walk around, as I pace, or while I’m commuting: here are my ideas, let’s dump them in. If I can’t take something any further, maybe the AI can make some connections; if it can’t either, I’ll leave it for now, and when I have a related thought, I can go back to it. Unless you’re a really good journaler or something, I don’t know what the analogy would be. But it’s so obvious, and it works great.
And in terms of the programming of it – again, I’ve got to give Karpathy a citation here – “English is the hottest new programming language.” It’s true.
And who are some of the best communicators in the English language? Lawyers. At least my partners, my colleagues – I look at our great appellate writers and think, you guys are going to be amazing at this. Because if you can communicate really clearly to a human – a judge, or whoever – I have a feeling you’re going to be able to communicate well with the AI. It’s night and day when you can communicate clearly.
Jen Leonard: That connects to the earlier point about the things we now do alongside AI that we can’t believe – we’re in this weird liminal period where you still find yourself in pockets where people are not AI users.
I was on a panel planning call a few weeks ago. There were four of us, and I was clearly the only one who’s crossed this Rubicon. And the first 15 minutes of the call were like, “What do we think an audience would want to know about this topic?” And everybody stared into space.
And then people were like, “Would they want to talk about…?” You know, Bridget, I’d have been asking four different LLMs. And I just couldn’t believe that we were all sitting there, trying to use our tiny brains to come up with ideas and bounce them around. What a limited way to engage your brain. And panel planning calls are not something any of us will miss.
I wonder what you think, in five or ten years, we’ll look back on today—something that you’re doing in practice today—and say, I can’t believe we used to do X with our time. Thank goodness we no longer have to do X with our time, because AI does it now. Or we supervise AI to do it now.
Chris Kercher: When you think about where AI is going, how we’re using it, and what may be surprising, there’s one maybe-obvious notion I’ve had – and I’m starting to think about where it leads – that picks up on the point you just made.
Think about it: ordinarily, without AI, people would’ve just made stuff up. “I guess people want X, so we’ll give them X.” I think we may reach a world where – forget agents and automating life – life looks relatively the same, except it’ll be better, because communication will be so much easier.
Think about what our clients – business people – are doing. They’re sitting there with a business issue. I see so many regular business people who, if anything comes within 100 miles of “legal,” just don’t want anything to do with it: “I don’t know, legal’s got it. It deals with the contract.” But if you give people the ability to get a little clearer in their thoughts and to organize them better, suddenly they may raise entirely new legal problems that never would’ve surfaced before.
My wife’s a pediatrician, and a lot of people struggle to communicate with a specialist who can be intimidating or who simply knows more than they do. You realize it’s actually not that complicated – obviously the science and the real insides of it are – but that surface-level ability to communicate is totally different now. So in every walk of life where people have a block because they can’t fully articulate their intent, they can now use this little tool to clarify and amplify – to really get out what they want, so that our ideas are better and more correct.
There’s more reliability in asking the AI, “What would people want?” than in us just making it up. We still have to evaluate the list of options – but at least we’re starting with a broader set of ideas that’s a little more all-encompassing. So I don’t know – I feel like everything is going to be better in some way, because we’re breaking down communication barriers, imagination barriers, and everything else. We’ll see how that works in practice.
Jen Reeves: I would add — similar to that point — here’s another great idea Chris had. You know those “pardon the interruption” emails (or RFI emails) that go out firm-wide? The ones where someone asks, “Do you know this person?” or “Does anybody have research on this?” Well, that’s a perfect opportunity where Chris or I will just run a quick Deep Research project. Basically, we use o3 to come up with a research prompt and pull out anything that might be helpful. We feed that into our Deep Research tool, and it comes back with an amazing dossier, which we forward to the partner who asked.
That’s been huge for showing people, “Oh, I didn’t know it could do that!” It’s super helpful. It helped me prepare for a recent call – I had intelligent things to say to the GC that I wouldn’t necessarily have thought of on my own. So again, you’re still the partner doing the call, but it helps you prepare so much faster. We love Deep Research; it’s been a great tool. And why wouldn’t you prep that way right beforehand? Instead of having to Google things, read a bunch of articles, or ask someone else to do it for you, it just makes everything so easy.
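For readers curious what that two-step pattern looks like outside the chat interface, here is a hedged sketch using the OpenAI Python SDK. The model names (“o3” and “o3-deep-research”), the sample request, and the request shape are assumptions based on OpenAI’s published API, not a description of Quinn Emanuel’s actual setup, which runs through ChatGPT’s built-in Deep Research.

```python
# Sketch of the two-step workflow described above: a reasoning model drafts
# a research brief, then a deep-research run turns that brief into a dossier.
# Model names and availability are assumptions; adjust to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

request = "Does anybody have background on this opposing expert?"  # hypothetical firm-wide RFI

# Step 1: turn the raw request into a structured research prompt.
brief = client.responses.create(
    model="o3",
    input=(
        "Draft a detailed research brief for this internal request, listing "
        f"the specific questions a researcher should answer: {request}"
    ),
)

# Step 2: hand the brief to a deep-research run, which searches the web and
# compiles a dossier. Runs can take minutes; background=True is an option.
dossier = client.responses.create(
    model="o3-deep-research",
    input=brief.output_text,
    tools=[{"type": "web_search_preview"}],  # deep research needs a search tool
)

print(dossier.output_text)  # forward to the partner who asked
```

Because these runs take a while, the natural rhythm is the one Jen describes: kick one off when the firm-wide email arrives, then forward the dossier to the partner who asked once it lands.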
Jen Leonard: Well, we like to lead with optimism. But if optimism doesn’t work, other lawyers should be fearful — because there are firms out there figuring out how to use these tools well. And these are great examples of things that seem small but can give you an advantage — they can really speed up your work and give your clients an edge.
Before we go, a lot of this era is about learning on the fly and learning really quickly (which Bridget and I actually think is a lot of fun). What is one resource that you’ve been using to keep up to speed on AI? Jen, let’s start with you. And Chris, you’ll get the final word before we have to say goodbye.
Jen Reeves: Definitely. Your podcast has been huge for me – especially on that road trip I just took! I also really like the Hard Fork podcast, because it makes it easy to stay up to date and it’s entertaining as well. There are some good email newsletters, too – Brainiacs is one – and a few more legal-focused ones that have been helpful; they only take a few seconds to read. But podcasts have been the most helpful for me, since I can multitask while I listen.
Chris Kercher: For me… it’s interesting. I worked on the acquisition of Twitter for Elon (who’s a client), and I think X (Twitter) is fascinating because all these brilliant AI minds are on there posting. Take Andrej Karpathy — his résumé is unbelievable, and he’s sharing ideas in real time. You also have people posting scientific papers and AI research. The amazing thing is, I can just download those papers and give them to Claude and say, “Explain this to me in plain English. How much of this is math versus insights about how to prompt or use AI?”
I think I’ve learned a lot by trying to understand where the research is and where things are going, and by seeing really creative uses on X. People like Ethan Mollick, Karpathy, and other thinkers — you can get the benefit of their ideas without being a scientist or part of that community. I really think that’s a great way to get smarter.
Jen Leonard: Bridget, you use X for that too, right?
Bridget McCormack: Yeah. I think that’s a great way to end. This is how we’re all learning, and it’s enough for now. The technology is changing so quickly that by following a few smart thought leaders who get to focus on it full-time (while the rest of us have our full-time jobs), we can take advantage of some great shortcuts. I often will just take Ethan Mollick’s word for the summary of a new academic study. I mean, I could read the academic study, but it would take me so long — it’s not the best use of my time. So following a few smart folks on social media (if you’re not on X, they’re all on LinkedIn as well) really works.
And it is amazing that podcasts are kind of the new education delivery device of this era — at least in my view. That’s how I’m getting information as well. You guys, this has been so much fun to have you on. This conversation is a great example of optimistic, positive thinking about how this technology is going to make not only better services for your clients, but — and what really came through for me — a more fun, more enjoyable practice. Like, a happier time at work. Because you have this new unlock that’s allowing you to do what you do best, more of the time.
It’s been such a fun conversation, and I’m really, really grateful to both of you for coming on — and grateful to Quinn for fostering such a great culture that allows this kind of experimentation.
Chris Kercher: Thrilled to be here — honored to be your first guests. And it’s really fun… As much as it’s fun to teach people who haven’t used these tools before, it’s even more fun to sit down with people who are using it and who get it. They’ve been on the other side and share a lot of the same experiences we’ve had. So, thank you.
Jen Leonard: Come back next year — our hope is we’ll have humanoid robots by then! Thank you both. And thanks to everybody out there for listening. We hope you’ll tune in next time for another edition of AI and the Future of Law.