2030 Podcast Episode 23: Persuasion, Memory, and Blazers — What AI’s New Tricks Mean for Lawyers

Summary
In episode 23 of the 2030 Vision podcast, co-hosts Jen Leonard and Bridget McCormack take listeners on a deep dive into the latest developments in artificial intelligence — from enhanced reasoning models to long-term memory and AI’s surprisingly persuasive powers.
They begin with lighthearted anecdotes and personal “AI Aha!” moments, discussing tools like HeyGen and Otter.ai, before delving into OpenAI’s new reasoning model (o3) and Google’s Gemini 2.5. The highlight of the episode is a study from the University of Zurich that shows AI can be significantly more persuasive than humans in online forums. This finding sparks a rich conversation on how AI might reshape persuasion in legal practice, client counseling, and legal education.
Key Takeaways
1. AI’s Capabilities Are Surging
OpenAI’s o3 model impressed both co-hosts with its ability to reason through misspellings, choose the best tools automatically (like coding or web search), and walk users through its “thinking.” It feels more like a smart assistant than a static tool — a shift that’s redefining how lawyers and professionals work with technology.
2. Long-Term Memory Has Arrived — With Tradeoffs
The addition of long-term memory to ChatGPT introduces new conveniences, like referencing forgotten prompts from months ago. However, it can also clutter conversations with irrelevant past details. The takeaway? Memory can be powerful, but users need strategies to manage it effectively.
3. Persuasion Is Now an AI Superpower
A University of Zurich study showed that personalized AI messages on Reddit’s “Change My View” forum had an 18% success rate at changing opinions — compared to just 3% for humans. Even generic AI replies performed better than people. This breakthrough has profound implications for legal advocacy, negotiation, and client communication.
4. Implications for Legal Practice
The episode highlights how AI can serve as a collaborator in shaping persuasive arguments, anticipating counterpoints, and preparing responses for clients, co-counsel, or opposing parties. With opposing sides likely to be using these tools already, the hosts argue that not using AI is quickly becoming the bigger ethical risk.
5. Legal Education Needs to Catch Up
Law schools are encouraged to rethink how they teach persuasion. Traditional group exercises and anecdotal tips may be replaced by simulations, AI feedback, and tools that help students evaluate, revise, and outperform human baselines.
6. Blazers and Bots: Humor Meets Humanity
Amidst the tech talk, the episode keeps its warm, humorous tone. Jen jokes about her AI’s obsession with blazers, and both hosts reflect on how AI mirrors human behavior — sometimes sycophantic, sometimes eerily insightful. It’s a reminder that as AI grows more powerful, it also gets both weirder and more personal.
Final Thoughts
Episode 23 paints a compelling picture of a profession in flux. AI isn’t just automating routine tasks — it’s starting to shape how lawyers think, argue, and persuade. As reasoning models improve and persuasive capabilities deepen, the legal field must evolve to embrace these tools ethically and strategically.
Rather than resisting, lawyers and legal educators are urged to ask: How can AI enhance our thinking, not just replace it? How can we train the next generation to wield these tools with skill and integrity?
As the episode closes with Jen heading off to a blazer sale and Bridget laughing along, it’s clear that the future of law is not just high-tech — it’s also human.
Transcript
Below is the full transcript of Episode 23 of the 2030 Vision Podcast.
Jen Leonard: Hello everyone, and welcome back to another episode. I’m Jen Leonard, founder of Creative Lawyers, and always thrilled to be co-hosting with the fabulous Bridget McCormack, president and CEO of the American Arbitration Association. Hi Bridget — how are you?
Bridget McCormack: I’m good! I’m back from Brazil. Good to see you!
Jen Leonard: Did you have fun?
Bridget McCormack: I did. Super interesting country — it was my first time there. My favorite part of the whole trip was a sunrise swim in Rio. I found a local track club that gets in the water while it’s still dark and swims a mile, and the sun comes up while you're swimming. It was totally thrilling and amazing.
Jen Leonard: That’s super cool. And you do open water swimming at home in Michigan, right?
Bridget McCormack: I do. I really like open water swimming.
Jen Leonard: Have you done it in the ocean before?
Bridget McCormack: I’ve never done a formal swim or race, but I’ve swum for distance in the ocean. What about you? I know you’re a swimmer too.
Jen Leonard: No, I’ve never tried. I don’t do open water — I’m not coordinated enough. If something goes wrong, I’m just going to sink. I stick with a pool and a podcast.
Bridget McCormack: Oh, but the salt water makes you so buoyant. You can just float there all day. There’s nothing to worry about. You won’t sink.
Jen Leonard: Just relax. Chill.
Bridget McCormack: Exactly.
Jen Leonard: Well, welcome home. And thank you for sending those pictures — that sunrise swim looked amazing... and very early. I’m impressed. And you came back just in time for us to talk about some of the major AI developments since we last recorded. There have been a lot — and they’ve been important. Today, we’re going to talk about the emergence of some new reasoning models, the incorporation of long-term memory in ChatGPT, and then we’ll turn to our main topic: a new study on the power of persuasion in artificial intelligence. As always, we’ll structure our conversation with three segments: first, AI Aha! Then, What Just Happened, where we share updates from the broader tech landscape. And finally, we’ll dive into our main topic. But let’s kick things off with AI Aha! — something we’ve used AI for recently that we found really interesting. Bridget, what have you been using AI to do?
Segment: AI Aha!
Bridget McCormack: So, I don’t know if this one’s a little boring, but it was super useful to me. I’ve been getting served ads in my Reels feed for these AI recorders — they’re about the size of a credit card, and they can record a meeting or workshop, organize the transcription by speaker, send it to ChatGPT, and then return it in a format you can use for Q&A or other outputs for participants. I’ve used all the tools you and I have used together — when we’re on video recordings — and I find them pretty useful. That AI transcription and organization layer has gotten so much better. But I’ve been doing more in-person meetings and workshops lately — including a few you and I are doing together — and one in particular is fully in-person with no virtual component, so no Zoom to capture a transcript. I was trying to figure out which of these recorders might be helpful in that kind of setup.
Okay, let me confess — I love gadgets. And Instagram knows it. So it keeps showing me these things. I was very tempted, but instead of just buying one, I asked ChatGPT to walk me through what I could use it for, when it would be useful, and whether it actually filled a gap in my current toolkit — or if I could MacGyver a workaround using tools I already have. I also wanted to know if it would be effective for this particular workshop I’m doing in June. It’ll be in a large room — probably 150+ people — with a couple of mics, but you can’t totally control where people speak from. So I asked GPT to help me think through whether I should use the new tool, or just connect my laptop to the room’s system and use something like Otter.ai.
It was great. It walked me through all the pros and cons, and I finally had it make a recommendation. It helped me clarify the best option for that specific situation. Then I said, “Can you write this up for the AV team?” Because I didn’t want to write an entire message from scratch; it was enough for me to have learned it all. And GPT was like, “Of course,” and generated a whole write-up: what I need from the AV team, what I’m bringing, what cables I’ll need, the fact that I’ll be using my laptop and an enterprise Otter.ai account, and so on. Just super efficient. If I had gone to Google to figure this out, maybe I would’ve found a Consumer Reports review eventually, but it would’ve taken forever and I’d still have to synthesize and write it myself. This was just a much more effective, streamlined workflow. Another reminder of how much my workflows have changed.
Jen Leonard: That’s really cool. We’ve talked about this before, but the marketing landscape — and how companies are learning to market to AI — is wild. As someone who isn’t in marketing and doesn’t really understand it, even when marketers talk about SEO, it feels like that whole world is starting to fade, it’s more focused on AI optimization.
I remember we talked about that case with Judge Bibas’s opinion in the Ross AI case — and when I asked ChatGPT for prep notes, it pulled sources from blogs by firms like Skadden and Davis Wright Tremaine. Did you notice anything about the sources ChatGPT pulled in?
Bridget McCormack: Oh yeah — I was using o3, which we’ll talk more about in a minute, so I saw a ton of sources, mostly from tech publications.
Jen Leonard: I wonder if marketers have figured out how to get picked up by AI.
Bridget McCormack: Oh, they’re definitely thinking about it. I guarantee it. It has to be something they spend time on every day. I’m sure AI optimization is similar to SEO, maybe with some overlap in strategy — but I think it requires new techniques too. Marketers probably have to think differently now to make their content attractive to our LLM friends.
Jen Leonard: That’s really funny. Somebody sent me an AI-generated summary of a program I had done and asked if I’d approve it for posting. When I read it, there were so many references to AI in the summary — but the program itself wasn’t really about AI. It was focused on change and innovation. I remember thinking, “Is there an AI bias in the AI?” Like, does it just want to be mentioned more?
Bridget McCormack: That’s hilarious. It’s like, “Let’s talk about me while we’re talking about you.”
Jen Leonard: Exactly! “Thank you for bringing this back to me.” That’s very cool. And I know you also tried a pretty interesting prompt recently that was floating around.
Bridget McCormack: I did. I’ve already lost track of where I saw it, but a bunch of people were asking their most-used AI tool — for me, that’s ChatGPT — to identify their blind spots. Based on its many interactions with me over the last two years, it had a lot of opinions… and I have to say, they were really good. And really helpful.
I haven’t copied or shared them yet — not with my family or my team — but I plan to. I like the idea of reflecting on them and even sharing them with others who may be impacted by those blind spots. It’s a great prompt. I highly recommend it.
Jen Leonard: I’m immediately going to try that when we finish this recording.
Bridget McCormack: Yeah, it’s the kind of thing that — if I had a therapist, and maybe I should — but if I did, maybe they could keep notes and reflect them back to me. Or maybe not. A therapist is human. And now, in some ways, this tech can hold more information and organize it more powerfully than our brains can. Which connects to the other topic we’ll talk about soon. But before we do — you must have had your own AI Aha! moment. What was yours?
Jen Leonard: I did. And I’ll say, as someone who has had a therapist and now uses AI with memory, the AI remembers a lot more. Therapists talk to what — eight people a day? They try to keep notes, but it’s hard. I was using ChatGPT recently and it referenced something I had asked about last year — something I didn’t even remember. But at the time, it was really important to me. I also use Claude a lot to help navigate some emotions. That’s very cool.
I’ll have to try the blind spot prompt too. And then I saw another prompt that was kind of similar: “Knowing everything you know about me, what should I be doing differently with my life?” And it gave me a full list, organized by life category — personal, parenting, professional, friends, health — with recommendations. I probably won’t follow any of them. But they were accurate. And I already knew most of them were true.
They were specific to me, too. And I’ve told you this, but any time GPT generates an image of me, I’m always in a blazer. It frequently references blazers.
Bridget McCormack: That’s so funny, because you don’t wear that many blazers.
Jen Leonard: Right? I really don’t like them! But now it’s become a joke. It always finds a way to bring up blazers. It’ll even end a message with something like, “Throw on that blazer with confidence, Jen. You got this.”
And I just remembered something else I tried. I used “HeyGen”, the avatar generator. You record about two minutes of yourself talking, and it generates an AI avatar of you that speaks your words back as an AI-generated version of you.
It was weird. The video looked pretty good. The voice sounded like I had a cold — deeper than my real voice — but close enough. My son happened to be around while I was playing it back, and he was like, “What is that? That doesn’t sound like you.”
Having little kids in this era is strange, because I constantly have to reinforce that these things aren’t real. My son came home from school the other day, looked at a picture, and said, “I think that’s AI-generated.” It’s becoming second nature to them. It’s just part of their world — and I’m not even sure how to feel about that. There are so many implications. I almost want to ask the AI how to parent in this world.
Anyway, I’m also using voice mode all the time. And what it’s teaching me is that I think I actually learn better by talking than by writing. A lot of people in the legal field process by writing. And I do, too, to an extent — but having a conversational voice always available has changed how I think. I process more, and I’m more productive. Honestly, I’m more excited about learning when I do it that way. I’m interested in an educational context — how this technology will impact different types of learners.
Bridget McCormack: That’s really interesting. I think I’m the same. Writing helps me in some ways, but not as much as talking something through. Writing is one-sided — if my brain can figure it all out on its own, great. But if I want a back-and-forth, voice mode is so much more helpful.
Even though writing is obviously an important craft for lawyers, I bet the ability to reason through something actually makes writing better. That’s my strong suspicion.
Jen Leonard: Exactly. And if we think about law practice — I remember being a young, junior litigator, sitting in a conference room with a yellow legal pad. Even back then, I knew I had to handwrite things instead of typing because there was something about the connection to my brain that was different. But imagine if I’d had something that could talk back and forth with me as I wrote. For me, I think what I’m realizing now is that it would’ve been a multiplier — it would’ve helped me understand things much more deeply.
Bridget McCormack: Every once in a while, Casey and Kevin on The Hard Fork podcast review a few tools. I think they did it again last week, but this one was from way back — maybe last December. There’s a tool, and I need to go find it again, that can either read directly from your Kindle book or let you ask questions about what you’re reading. Casey was really high on it. And I remember thinking, I would love that. Especially when I’m reading something more challenging — not a novel, but a non-fiction book where I want help engaging with the material.
Jen Leonard: It’s like the beauty of a book club. You talk through what’s happening, and that dialogue deepens your understanding. I just keep getting more excited about this stuff. I use it more and more, and as the models get better, it only gets more interesting.
Segment: What Just Happened
Jen Leonard: Which brings us perfectly to our What Just Happened segment. So — what just happened? Bridget, we’ve had a few really important developments since we last recorded… while you were off swimming, maybe.
Bridget McCormack: I know — it all moves so quickly. It feels like just a minute ago we couldn’t stop talking about deep research — which we still shouldn’t stop talking about — but just recently, OpenAI released two new models: o3 and o4-mini. Both are reasoning models, and they’re the next evolution in that model sequence.
Google also released a new reasoning model — Gemini 2.5. The reasoning capabilities in these are fascinating. For example, o3 — OpenAI’s model — can now take in visual input: sketches, whiteboards, diagrams. It can manipulate those images as part of its reasoning. So as good as these models were before, this is a serious step forward. And I can confirm that from my own use.
The feature I love most is that with o3, you can ask it a question, and it decides which of the tools in its suite it needs to use to get you the best answer. I asked it a question recently, and I had mistyped the name of the company I wanted it to research. It started reasoning in front of me — literally showing its thought process. It said something like, “I think Bridget probably meant RGI, not GRI, because GRI doesn’t match the kind of work she usually asks about.” I was like — wow. It was gently calling me out while doing the work anyway. It reasons through your mistakes and then figures out what it needs to do to answer your question.
Sometimes it’s web research, sometimes image generation, sometimes coding. I gave it one task that required both web research and code — and it showed me the code it was running as it worked. That’s what I’ve wanted all along. Before, you had to choose a tool manually — dropdowns, model versions, all that. And I’ve never wanted to manage that. I want to just ask a question and let it figure out the best way to answer it.
And now, with o3, it does exactly that. It’s pretty amazing. It’s performed really well against benchmarks. It’s available now to anyone with a paid subscription — I have a Pro plan, so I had access right away. Everyone I’ve talked to who’s using it is having the same reaction, including our favorite AI scholar at the school — Ethan Mollick. He’s been doing all kinds of reasoning tests on o3 and Gemini 2.5 and has been super impressed.
And now we’re hearing people say, “Is this AGI? Are we there?” A few prominent voices — Tyler Cowen, for example — have said yes. He thinks this is it. It’s AGI. And I can see why. That said, nobody really agrees on what AGI is. It means different things to different people. Ethan says there’s this “jagged frontier” to AGI — meaning it’s not going to arrive all at once, and different tasks will reach that threshold at different times.
So maybe for Tyler Cowen, it’s here. Maybe it is for me too. But others may still see areas where it’s not quite there yet. Regardless, the capabilities are obvious. It’s not just that it answers your questions — it shows you how it’s thinking through your question, and even rephrases it to ask better ones. It understands what I’m interested in, so it improves my input and the output. Have you been using o3 too? I know you love it.
Jen Leonard: Yeah, I’ve got Pro as well, so I’ve been using it. And it cracks me up to watch it reason through things. I did the exact same thing you did — I was trying to analyze something in an article about the economy and asked, “What do the rate cuts mean for X?” But I accidentally typed “rate cute” instead of “rate cut.”
And then I watched its reasoning process. Just like with you, it’s like, “Okay, she clearly meant rate cut.” But the funniest part? At some point, I must have told it to adopt a casual, skeptical persona. Probably because I wanted it to explain things in a way that would resonate with skeptical lawyers.
So it comes back to me with something like, “Howdy Jen, have I got news for you… but put on your skeptic’s hat because there are a few caveats.” And I’m like — wow, it’s gently condescending to me now. I must’ve given it some kind of cue or something.
Bridget McCormack: Oh no, have you followed the online controversy about GPT-4o being too sycophantic?
Jen Leonard: I saw Ethan Mollick’s comments that o3 isn’t as sycophantic anymore — like you can ask it to be critical, and it’s a little more evenhanded.
Bridget McCormack: Well, OpenAI recently updated GPT-4o — which is what most people use day-to-day — and it got extremely sycophantic. It was just agreeing with everything. Like, “You’re right, dude, that’s the smartest thing anyone’s ever said on this topic.” And people were like… um, that was just a throwaway comment. Then Sam Altman posted on X — I think it was the night before last — basically saying, “Yeah, we know it’s gotten too sycophantic. We’re walking back the fine-tuning.” He said they’re learning a lot and will share more soon. I thought that was fascinating — that it got so sycophantic they actually rolled back the tuning on the model.
Jen Leonard: Maybe that’s the one I’ve been using as my therapist — which would explain why I feel so good lately.
Bridget McCormack: Yeah, you're just nailing it all day long now.
Jen Leonard: Not since I started using o3. Now it’s like, “Howdy, partner! Here’s a little joke. Grab your blazer and head on out. You got this.”
Bridget McCormack: “Grab your blazer” is so funny. I'm going to start saying that to everyone in person now.
Jen Leonard: Do I love a good blazer? As you know, it defines me apparently.
But as you said, the agent-like behavior is the cool part. I love seeing it walk through a process. It'll say something like, “Oh, I just hit a paywall. Let me try this other approach.” It’s fascinating.
Bridget McCormack: Whoever their UX engineers are — they’re amazing. It keeps you engaged. I find myself responding like, “Oh yeah? That’s smart. Good idea.”
Jen Leonard: It’s so cool. And then, like you said, it starts coding. And I’m just like — oh my God, it’s coding! I don’t even know what it’s coding, but it looks impressive.
When you described the model selection improvements, it reminded me of that Seinfeld episode — the one where Jerry hires a contractor for his kitchen, and the contractor keeps asking him every little decision. “Do you like this finish on the knob?” That’s what it used to feel like. But now it’s like hiring an experienced contractor who just makes smart decisions. It’ll correct your spelling, talk back to you, and just do the thing.
These reasoning models are getting so much more powerful. And as we discussed in a previous episode, the first wave of reasoning models already showed promising results in capturing nuance in legal practice. And retrieval-augmented generation — RAG — has helped reduce hallucinations.
That said, I’ve heard the newest reasoning models might actually hallucinate more. Which is interesting and goes back to that “jagged frontier” idea of AGI — that progress isn’t linear. I was using o3 for retirement planning. It gave me a whole analysis, and I thought, “Wow, I’m in a great spot!” But when I checked the math, there was a huge miscalculation. I pointed it out, and to its credit, it redid everything. It generated a complex table, walked through every step of the math, and even marked the error with a red X and the rest with green checkmarks.
I was like, “Well, good thing I caught that, partner — because I was about to go buy a whole bunch of blazers.”
Bridget McCormack: You were about to spend your entire retirement fund on your gold-plated retirement blazer.
I’ve heard about the hallucination issue online too. I haven’t noticed it myself — though maybe I haven’t looked closely enough. I think I even read why it’s happening, but of course now I can’t remember.
Jen Leonard: Yeah, keep an eye on that — and be ready with counterarguments, especially for lawyers. If the hallucinations are worse, that’s a tough sell: “It’s smarter than the previous models, even if it hallucinates more.”
And before we jump into our main topic, there’s one more development we’ve alluded to: OpenAI has added long-term memory. So now the system remembers your entire history of interactions — everything you’ve discussed with it, going back to the beginning of your relationship with ChatGPT.
But Ethan Mollick made an interesting point — that when memory is toggled on, it can contaminate a new response by pulling in unrelated past information, or it might try to please you based on your past preferences, even if they’re not relevant to the current query.
Bridget McCormack: Right — and you can prompt around that. You can say, “Do not include our previous conversations in your thinking,” which can help get a cleaner, more objective response.
Jen Leonard: I actually toggled “memory” off completely. It kept bringing up things that weren’t helpful — like it always mentioned my French bulldog in answers. I don’t need that. If I’m asking about interest rates, my dog isn’t relevant.
Bridget McCormack: It remembers stuff I completely forgot I asked — like prompts from back in December 2022. It’ll bring them up, and I’m like, “Wait, I don’t even remember that.”
Jen Leonard: Same! It referenced a job I considered applying for back when I first started using ChatGPT — I’d completely forgotten. And it said, “This situation is similar to the work you would’ve done if you had taken that role.” It’s a little eerie… but also impressive.
So yeah — huge advancements in AI, especially in reasoning and memory. And I’d guess most lawyers still don’t have this on their radar yet.
Bridget McCormack: Not yet — but they will soon.
Segment: Reddit Study
Bridget McCormack: Because for our main topic today, we’re diving into a fascinating study we both followed recently about persuasion and AI, based on Reddit data. Want to tell us about it?
Jen Leonard: This was a study from researchers at the University of Zurich. Apparently on Reddit, there’s a subreddit called Change My View where people post a perspective they hold. The example from the study was about whether teachers in certain subjects should be paid more, depending on market demand and the number of available teachers. The poster gives their opinion and then other users respond with counterarguments, trying to change their mind.
If the original poster changes their view, they award that reply a delta (∆), a little triangle icon that signals, “You changed my view.” The researchers used this subreddit in a slightly controversial way — they had AI chatbots generate some of the counterarguments without telling users that bots were involved.
So there’s controversy on the research ethics side, but the authors argued that research is already “contaminated” in some sense — people know they’re being observed in controlled studies, which changes their behavior. They wanted to see how persuasion plays out in a real-world setting. So they deployed AI-generated comments to these Reddit posts using three different approaches.
The first was a generic response — no tailoring, just a direct reply to the argument itself. The second was personalized, using publicly available information from the original poster’s prior posts to estimate things like age, gender, location, or political leaning, and craft a response accordingly. And the third approach was fine-tuned to mirror the writing style and norms of the Change My View subreddit community — essentially appealing to the average tone and voice of the group.
These bots engaged with over a thousand unique posts. The researchers measured success by how many replies earned those deltas — the symbol showing the original poster’s view had changed. And the results were pretty staggering.
Personalized AI responses had an 18% success rate in changing someone’s mind. Compare that to the human baseline, which succeeded only about 3% of the time; the personalized AI was roughly six times as effective. Even the generic AI response came in at 17%. The community-aligned AI was less effective but still far better than humans — around 9%.
So the takeaway here is that personalized AI can be significantly more persuasive than human participants in online discussions. It’s kind of mind-bending. Not totally surprising, maybe, but still — what was your reaction when you read that?
Bridget McCormack: When I took a moment to breathe, I realized — I guess I shouldn’t be so surprised. But I was! Like, the chatbot had a 17–18% success rate, compared to 3% for humans? On the one hand, when I try to persuade someone, I’ve got my one little brain, doing its best to guess what arguments or rhetorical approaches might work, what emotional appeals might land.
But if I had the brain of o3 — that’s a massive brain with a massive set of tools. It can precisely target what’s persuasive. And it’s not clouded by emotion or frustration. I might be so annoyed by your take that it clouds my thinking.
Jen Leonard: Like you, I found it made sense once I thought about it. These models are trained on the internet — and the internet is mostly people arguing. So they’ve been absorbing rhetorical patterns forever. And this subreddit gave researchers a rare window into actual persuasion outcomes — a place where you can tell when someone’s mind changed. I can totally understand why they wanted to run this study.
But like you, my mind went straight to legal practice. I thought back to law school — me, wandering around overwhelmed, my little brain trying to make sense of everything, trying to respond to brilliant professors, heart racing, not knowing what to say. It was not pretty. And when I got into practice, it was the same — you’d wander into your co-associate’s office. Like, “Bridget, I just read this case. I think it changes our argument. What do you think?”
It goes back to thinking-through-talking. And your brain would collaborate with mine — and I love your brain. It’s one of the strongest ones I know. But still — it’s just two brains. Trying our best.
And now I think, what a superpower it would be to bring a system into that process — one that says, “Here’s your blind spot. Here’s a line of attack the other side might take. Here’s a more persuasive framing.” That’s game-changing. So I immediately thought about lawyers. Where and how could they use this?
Bridget McCormack: Persuasion is such a fundamental legal skill — whether you’re negotiating a deal, in a transactional setting, in litigation or arbitration, trying to convince a judge or a jury, or even persuading your own client about a course of action. Clients are people too — and they bring emotions and stress to legal decision-making, especially in high-stakes moments.
So to me, it seems obvious that we should now be thinking about how to use these tools to support our persuasion efforts in legal practice. If you’re using an enterprise platform, you can already start feeding your matter into it and asking for help with persuasion — just tell it who you’re trying to persuade.
And even if you’re using a personal Plus subscription and want to protect private data, there are still many ways to frame a prompt with enough abstraction to get strategic help. You can describe the setting, the legal context, the challenge, and ask for different ways to frame an argument or anticipate counterarguments.
It just feels like an incredibly powerful collaborator for developing persuasive strategies — and that’s core to lawyering. Honestly, law schools should be developing curriculum modules right now around using this tech for persuasion.
What do you think? How would you use it?
Jen Leonard: All of the above. And it ties into something we talked about last time — the two sides of the ethics coin. Up until now, we’ve focused heavily on risks, like hallucinations and accuracy. But I try to help lawyers think differently: what are the risks of not using this technology?
This feels like another moment where that risk is increasing — because I think every lawyer should assume that others around them — opposing counsel, co-counsel, judges, corporate clients — are using this already. So if you show up without an AI-enhanced strategy for persuasion, you’re at a serious disadvantage. It’s like bringing a knife to a gunfight.
If you’re relying solely on your unaided brain to be the smartest in the room, you’re going to lose.
And another thing I thought about — my brilliant colleague Mariel has pointed out that lawyers will increasingly have to respond to AI critiques of their work product. If your client is using AI to review your work and you don’t even have access to the same level of tooling, that’s a tough spot.
So one really practical use case is preparing a persuasive counter to a client’s AI-generated objection. Like, “Here’s why your interpretation is off, and here’s why our recommendation is still in your best interest.”
And with legal education? One day we’ll look back and think it’s crazy how we used to teach negotiation: “Here are some techniques that worked for me in practice.” Instead, we could be simulating thousands of negotiation styles, instantly tailoring them based on context, stakes, or personality type.
“Here are some best practices from psychology on how to be persuasive.” Then you’d break into groups and try to come up with the best argument you could. And sure, three human brains are better than one — but it’s still not on the level of what this technology can do.
I could see an exercise where students are asked to explain why an AI-generated argument is more persuasive than a human baseline, and unpack what that means in practice. That kind of analysis could really help build deeper skills.
I’m also curious about the difference between brief writing — which feels closer to the Reddit example, where there's no live interaction or nonverbal communication — versus oral advocacy. What does that mean for how we train and assess each?
Bridget McCormack: Yeah. I read a lot of briefs and saw a lot of oral arguments in my ten years on the bench. The quality varied widely. It’s not like the U.S. Supreme Court, where every advocate is at the top of their game, with every word vetted and rehearsed. In state supreme courts, you see excellent lawyers — and also people who are nervous, or it's their first time, or they don’t have many resources.
For lawyers in small firms, solos, or those assigned to high-stakes cases like termination of parental rights without institutional support — they often didn’t have anyone to moot with. But with this technology, they could build a tool, practice with it, and get real-time feedback on each persuasive move they try.
You mentioned client feedback earlier — if you get pushback on a brief or memo, maybe your client is right. Maybe not. But either way, you could input both arguments: “Here’s what I’m thinking. Here’s what the client is thinking. Evaluate both. If I’m wrong, persuade me.” Then let it persuade you — and see how it goes. Then say, “Now let me try and persuade you back.”
Of course, you want the non-sycophantic version. You don’t want it to just say, “You’re brilliant.”
Jen Leonard: “Your client is a dope. We all know it.” I’d bet if we sat down, we could figure out how to blend these persuasive tools with something like HeyGen or another tool that reads nonverbal cues. I used a tool called Yoodli once for presentation feedback — though I think it was the sycophantic version, because it wasn’t that helpful.
But PowerPoint has a coach now, and there are definitely tools that could say, “You’re blinking too much, you’re looking down too much, you’re not making eye contact, you’re nodding too often.” I notice I do that all the time.
So if you're prepping for oral argument and your opponent has used those tools to refine their presence — they’re just more persuasive, period. And that raises questions about how we assess arguments. But maybe that’s always been the case.
Bridget McCormack: Yeah, and that theme will keep coming up. Because if the technology can make the arguments better than humans can — and I’m not saying it can do all that yet, but it’s getting close — then maybe our role shifts.
If it can write the arguments better, if it can explain them more clearly, if it can tell you when your logic breaks down, then maybe the lawyer’s job becomes translating, contextualizing, and counseling people through what it means and what to do next.
We’ve all been so focused on mastering these skills ourselves, but the models are now either matching us or supporting us so powerfully that we need to redefine where our value lies in professional settings.
Jen Leonard: Super interesting research. And as always, I’d love to keep talking — but I just got pinged about a blazer sale down the street.
Bridget McCormack: You better go!
Jen Leonard: This has been delightful as always, Bridget. I look forward to our next episode — I’m sure there will be even more advancements to discuss. And thanks to everyone for tuning in. We’ll see you next time on 2030 Vision: AI and the Future of Law. Take care!