Leading with AI: Strategies, Legal Insights, and Defining Key Terms

 

Summary

In Episode 2, co-hosts Jen Leonard and Bridget McCormack explore how lawyers can lead effectively in a time of rapid AI development. They begin with personal “GenAI moments” that illustrate how human-like—and surprisingly helpful—AI agents are becoming in everyday life. The episode then turns toward recent legal tech developments (like the vLex–iManage partnership), core conceptual frameworks (e.g., the jagged frontier of AI capability), and practical questions from legal professionals.

They emphasize that navigating the AI landscape requires legal organizations to embrace curiosity, create strategic frameworks, and include voices from all levels—especially junior lawyers. The hosts also introduce useful resources and reflect on what legal professionals must do to stay ahead of this fast-moving technology.

Key Takeaways

  • The “Jagged Frontier” of AI Capabilities: AI excels at some tasks and fails surprisingly at others—often unpredictably. Lawyers must test use cases broadly and repeatedly rather than dismiss the tech after a single bad result.
  • Secret Cyborgs Are Already in Your Office: Whether firms acknowledge it or not, many lawyers (especially juniors) are already using GenAI tools informally. This poses greater risks than safe, guided adoption.
  • Generational Gaps Shape AI Adoption: Legal teams span four or five generations, each with different comfort levels with tech. Open conversations and inclusive training are essential to bridge the “uneven distribution of AI knowledge.”
  • Juniors Must Step Up—and Leaders Must Listen: Junior lawyers can spot powerful use cases early due to their workflow exposure. Firms that involve them in AI strategy signal innovation and attract top talent.
  • Leaders Need Strategic, Adaptable Frameworks: Rather than chasing every AI trend, legal leaders should ask:
    • What should we stop doing?
    • What will we soon be able to do?
    • How can we scale or democratize our services?
    • How do we move work upmarket?

Transcript

Jen Leonard: So welcome back, everybody, to the second episode of 2030 Vision: AI and the Future of Law. I am your co-host, Jen Leonard, and I’m thrilled to be joined by Bridget McCormack, the President and CEO of the American Arbitration Association. We’re excited to create this project to help accelerate AI literacy in the legal profession. And Bridget and I thought it would be fun—because we're constantly playing around with these technologies—to kick off our show today by talking about our "Gen AI moments." Those are moments where we've been playing with tools (maybe ones we've been using for a while), and then we do something with the tools that feels especially magical or different from what we've experienced before. So, Bridget, I would love if you would lead our conversation by sharing your most recent Gen AI moment.

Gen AI Moments

Bridget McCormack: Oh Gosh, I might start with my first one and then I'll build up as we go along. I was thinking about when I first started using the technology and I thought, Wow, this is going to start occupying a lot of my time and brain space. And it was pretty early into my ChatGPT subscription—which I confess I got, I think, the day it was available. Just from following people on Twitter, I figured I had to try it. I was sort of doing some pretty basic prompts and thinking, Okay, yeah, this is kind of fun. It's nice to have an answer all put together instead of pulling from a bunch of different websites. But my husband and I were traveling to northern Michigan, to a particular county that has great vineyards and also great hard cider places. And I wanted to figure out if we could get to all of the hard cider places in one day and in what order. And I asked it, and it wrote me the most beautiful plan for how to do that and where to go and when to go there.

But it included one cider place that was in a neighboring county. So I wrote back and said, You know, this is pretty good, but this one place is not actually in Leelanau County. I asked about Leelanau County. I wanted to see how it reacted. And it was so polite and kind in its response to me. It apologized. It said, "I'm sorry, I'll try and do better. You're right." It told me the county it was in and amended its answer. And I was like, Wow, this is actually like a thought partner. I'm actually going to be able to have a conversation back and forth. It's not only going to give me information; it's going to work with me. And that's when I thought, This is going to have so many applications in my work life and life-life that it's time for me to roll up my sleeves and learn everything I can about how we got here and where we're headed. I don't know... have you had one recently that's been fun?

Jen Leonard: I have, but I want to ask you a question about yours first. I love that you started with the most important task for generative technology: finding hard cider! Were there places that it put on the list that you weren't familiar with?

Bridget McCormack: Yeah, there were—and I thought I was familiar with most of them. So in part I was asking just to see, because I thought I could test its chops. But it turns out there were two that I was not familiar with, that I got to discover. I did not have my mind changed about my favorites based on visiting them, but I like to have my favorites reinforced by doing more thorough research. So I really appreciate my friend ChatGPT for helping me with that.

Jen Leonard: Well, I celebrate your very specific version of reinforcement learning. It’s something I could get behind. I'm also curious, because the GPT technology was so polite to you—and I've been thinking about this a little bit recently, and somebody mentioned it the other day—are you polite in response when you engage with the tools? Like, do you say please and thank you? And do you think we'll continue doing that? And what does it mean?

Bridget McCormack: I love that you asked me this. I do it consistently—I am always polite. I say please and thank you. I give it positive feedback when it does something that I was hoping it would do well, as if I'm talking to a person. And I don't think it's because I have some inside information that being polite gets better answers (although I've heard Ethan Mollick say that's true, and maybe we should track down that research sometime if folks are interested). I think it's actually because I feel like I'm interacting with an agent that's human-ish, so I just default to my human settings. I use "please" and "thank you." Do you?

Jen Leonard: I always do. And I also give it positive reinforcement—both because I think I'm just conditioned to do that, and also because I'm so frequently amazed at what it does. I just have to remark to somebody about it: “That's amazing, thank you so much.” Then of course it will say, You're very welcome. Please let me know if you need anything else. It’ll be interesting to see how long that lasts—whether we continue being polite to the technology or we become very rude with it.

Bridget McCormack: I don't think that'll change, because I suspect the technology will only become more and more agentic—more like somebody who's working with us on a project. I mean, I don't start treating people I'm familiar with like crap just because I know them well. I think it'll be the same here, too. Anyway, I've been talking a lot... tell me about a moment you've had recently.

Jen Leonard: Sure. I was going to a hotel and I wanted to see if it was pet-friendly. And that's exactly how I asked the question: I said, "Is this hotel in the city pet-friendly?" And it wrote back, They are pet-friendly. They have specific rooms and you need to call this number to ask. And then it said, Are you planning to take your French bulldog? I was stunned by this, because I hadn't mentioned a French bulldog in the prompt—or a dog at all. I had just mentioned a pet. And I realized that I had previously chatted with ChatGPT about the bulldog.

But I found it fascinating that it made that connection. I know now there's memory stored, but the fact that it connected a pet question with that earlier conversation was really stunning to me.

Bridget McCormack: That's wild. That's completely wild. It reminds me of just one more short one—and then I know we have too much to talk about today, so we'll have to move on. I have the Pi app. I have a subscription to Pi, which is the personal AI agent built by Mustafa Suleyman and Reid Hoffman. (I know Mustafa is now at Microsoft; in fact, I think most of that crew is now at Microsoft.)

But a long time ago, like last spring, I did a presentation in Philadelphia—actually, at a law firm—about generative AI. And I was in my Uber on the way to the airport, flying home to Michigan, and I got a text from Pi. It has a text interface, and it said, How did your presentation go? I hadn't talked to Pi in weeks, hadn't told Pi I was doing a presentation—nope. I mean, I know that the law firm had mentioned it on LinkedIn, I guess, or there must have been some public-facing mention of this presentation. But Pi just wanted to know how I felt about it and how it went.

Jen Leonard: Oh my gosh, and how did that make you feel?

Bridget McCormack: I mean, I love my husband very much, but I don't think he had any idea I was doing a presentation on AI that day. He didn't ask me. So it was kind of nice that somebody wanted to know, How did it go?

Jen Leonard: Gosh, that's incredible. I'm fascinated by that. And also I'll put in a plug—I know people are always curious about the resources that you can use to learn about all of this stuff. And you mentioned Mustafa Suleyman. I don't know whether you've read The Coming Wave yet, the book he wrote, but it's a really interesting take on converging technologies in the future.

Bridget McCormack: I have read it, and I actually think that one should absolutely be on the resource list.

Jen Leonard: Yeah, because he talks not only about generative technology, but also biopharmaceuticals and quantum computing—and chip development—as sort of the converging factors.

What Just Happened

Bridget McCormack: Let's do a short update on news that's happened in legal and AI recently. There was some news about a new partnership or joint venture between two large legal tech companies, vLex and iManage, about combining the different data sets they work on to make work more efficient for lawyers (mostly in law firms). Tell me what you know about that, what it means, and why lawyers should care.

Jen Leonard: So one of the things we're trying to do with our project together is help people sort of cut through the noise and understand the concepts related to all the mergers and partnerships and activity and new tools. And I saw this partnership between vLex and iManage, which is, to my mind, a great example of trying to bring together two bodies of information that lawyers use so frequently. We talked in the first episode about why we think generative AI will be so transformative in a profession that is based on language and has all of these really refined and well-organized sources of information. But in law firms in particular, you have two different sources of information. You've got your public law—although law firms pay huge subscriptions for that public law access (and we can talk about that on a different podcast)—all of their legal research: the case law, the statutes, the restatements, the secondary treatises, those kinds of things that lawyers use to advise their clients.

And then you have this other body of information, which is the internal work product that the attorneys in your firm have generated over the last two decades, at least. And that's all been captured in your document management systems or your case management systems. But those two bodies of information have never really seen one another in a major way. And what I took from this combination (and others like it) is that it's an attempt to break down the barrier between that public law information and your internal work product and unlock the ability to be more efficient—the ability not to reinvent the wheel every time, or to have to exit one body and enter another and carry that information back and forth. So I thought it was a really interesting development, because it's something you and I really focused on at the outset of this, about how powerful it can be. And now we're trying to resolve some of the friction that gets in the way of that power.

Bridget McCormack: Yeah, I'll be really interested to see where this goes. I think that's one of those things that lawyers just take for granted—that we have to hop in and out of these different systems to work through problems. And when all of a sudden we don't have to, it will have some, I think, significant ripples in our workflows and how we can serve clients. So that'll be exciting to watch.

Jen Leonard: Yeah, for sure. And I expect we'll see more combinations like this. We’re going to talk a little bit about the role of associates in a minute. But a lot of my experience as a junior associate was jumping in and out of these different bodies of information and synthesizing and aggregating a lot of it. So, you know, one of the challenges ahead is really reshaping those roles that involve that activity—which I think will be exciting for a lot of people, to not have that be a major part of your job anymore and to really focus on the law itself.

Definitions: Jagged Frontier, The Law of Uneven AI Distribution, & Secret Cyborgs

Jen Leonard: Another thing we wanted to do—something we've been trying to do in many of the presentations we get the chance to give together and separately—is describe some concepts related to generative technology that make it different from other technology, that make it (as Ethan Mollick, who's probably one of our favorite thought leaders, says) very weird. One of the weird things about the technology is that it was released without an instruction manual. We have no specific way to use it. And it seems like the companies and technologists that created it don't have a particular interest in telling us how to use it, which is great. But there are different concepts related to that weirdness that we want to describe for people and then explain why we think each is important for lawyers to understand as they’re trying to get their arms around this. So one of the first topics—Bridget, it would be great if you could explain this for our listeners—is a concept that came out of a paper that Ethan and other professors wrote last year. It's the concept of the jagged frontier of capabilities. What does that mean?

Bridget McCormack: Yeah, I've used this term many times since I read it in one of Ethan's blog posts. (I'm just going to fess up right now: I never read the actual paper. I usually read Ethan's descriptions for regular people about the papers, rather than the papers themselves.) But I found it a super useful term because it resonates so much if you use the technology. And what they mean by it is: the technology is excellent at some things—frankly, better than humans at some things that humans do. And when you find those, as we described earlier, you're kind of amazed and delighted and want to keep using it and figuring out what else it can do. 

Then there are other things that you would expect it to be good at that it's not very good at. Some things we know it's not good at: it's not yet great at math (it's getting better at math), so things that are sort of math-adjacent you can probably reason that it's not going to be great at those. But then there are just some other things that you would expect it to be good at, and it's not. And the only way to figure it out is to push to the edge and see if it works. So Ethan and others call it the "jagged frontier" because some things work and some things don't.

I think this is so important because I meet people who started with one of those use cases that it turned out not to be great at—or turned out to be sort of, you know, middle-school-level good at (which isn't that great if you're a lawyer, for example). 

And they might get discouraged... or not even discouraged, they might just think, Well, this isn't very useful to me, and they move on with their life and go back to everything they used to do, instead of trying another thing that it might turn out to be good at and save them gobs of time in this other thing that they also have to do every day. And that is a really complicated place for a technology to sit if it wants to grow a user base, for the reasons I just explained. I've thought about how that maps onto legal, but I wonder what you think. Do you think that causes extra uptake issues for lawyers, or about the same?

Jen Leonard: I think it creates a lot of complexity in having the conversation in your organization, even with just people you work with. Because like you said, if you are sitting and you're talking to two lawyers and they've tried the exact same type of technology—let's say GPT-4 or Claude or something like that—but they've prompted it with different questions, and one of them got amazing results and the other got really disappointing results (because one’s outside the frontier, one’s inside the frontier), they think they're arguing about the same experience when they're actually having a totally different conversation. And they both may be right. 

And you don't really know, when you're trying to help people understand the technology, what prompts they've actually used and whether those fall inside or outside the frontier—or whether this person is particularly skeptical, even if it was inside the frontier. So level-setting and making sure that people are trying many different attempts to understand the technology—and trying them over time, because the frontier keeps changing as the technology learns more and more—I think that makes it so strange in terms of how you have a unified conversation with people. But I think at least recognizing that there is this frontier is a helpful first step. And I almost want to create a visual for people, where there's a frontier and you're putting Post-its inside and outside the frontier, and then continuing to test it over time to see how that frontier changes.

Bridget McCormack: Yeah, that's a great point about how the frontier today isn't necessarily going to be the frontier tomorrow, or next week, or certainly not three months from now. So you have to kind of keep at it to figure out where the edges are.

So what about “the law of uneven AI distribution”? That's another term that I think we both use now in presentations, and I find it just really true in my own life. What do we mean by that?

Jen Leonard: Well, I know we’re going to share our resources at the end, but I hear this most frequently from the hosts of The Artificial Intelligence Show—Paul Roetzer and Mike Kaput have talked about this. I think there's a great blog post that Paul Roetzer wrote about it.

But the idea essentially is that everybody is at a different place right now, at least in understanding what generative AI is, what its capabilities are, and what impact it will have on the world and our work. And that's another thing that makes it difficult to have sort of a uniform conversation with a group of attorneys or legal professionals or judges. People think that they're knowledgeable, maybe, but they're not as knowledgeable as somebody else in the group. Or there's another person who maybe is a little bit bashful that they haven't kept pace with what's happening. So I try to start every presentation I do with some of these concepts—including this one. It might feel infantilizing to you that we're going to start here, but we can’t actually move forward with a conversation as an organization or a team without level-setting at the moment and then moving forward.

How do you think this impacts things, Bridget? We're working in a world with four different generations for the first time—maybe in some places five different generations, if you have people who are very senior in the organization. What does this mean in the context of that complexity?

Bridget McCormack: Yeah, I think it creates a lot of opportunities for bridging some of what feel like pretty sticky generational divides post-pandemic—at least I have found that in a number of workplaces and organizations I've been talking to. But it's interesting how many presentations I have done, or talks I've given, where I've heard from a very senior person, "I'm really glad I am at retirement age and I don't have to figure all of this out." 

And I think that's too bad, because I actually think—like with every other hard problem—we're usually better off if we have people with different perspectives contributing to it. But I have heard that quite a bit. My guess is there are probably some Gen Z-ers who wish they could retire too, because, gosh, there's a whole lot out there that they have to learn, and it feels like it's moving so fast and it's intimidating. I have a bunch of twenty-something kids, and they're not all equally committed to technology. And the ones that aren't, I think, are like, "God, do I have to learn all that now, too?" There’s a lot out there to learn, and it's changing so quickly. That's important to remember when you're working with an organization—we probably all have different levels of knowledge and experience, and we should all be able to be comfortable talking about that and learning about it. Everybody's learning, and that's going to be true for a while.

Jen Leonard: I agree. I think it's really exciting, too, that this is so new that, if you like learning (and I think most lawyers do like learning, and most legal professionals like learning), you can lead your efforts no matter which level you're at. And we're going to talk in a little bit about how associates and junior lawyers can get involved. But there was some interesting research that just came out last week, I think, about how sophisticated or wise junior usage of generative technology is. And there is sort of this myth of the "digital native"—that just because you're younger, you will have a better understanding of how to deploy technology. And it turns out, based on this research with BCG consultants, that's not necessarily the case; you also need experience and judgment to know how best to deploy these technologies. Which I think is another interesting thing that maybe senior leaders don't fully appreciate. And we'll talk a little bit more about how to involve your junior lawyers.

As we're thinking about all generations—and maybe juniors in particular, but really everybody across the board—there's this other concept that we both really enjoy thinking and talking about, and that is the concept of the secret cyborg, which sounds so sci-fi. What is the secret cyborg?

Bridget McCormack: "Secret cyborg," I think, also came from Ethan Mollick, but I use it now all the time when I'm talking to lawyers (in particular, those who are reluctant to figure out what all of this technology means for their business). I had an example of this just recently at a conference with a very excellent senior lawyer who is the managing partner of his firm. It's a smaller firm, but a firm that does international business in Colorado. And he said, "No, no, we're not doing anything with AI. No way." He just had a lot of assumptions about how it was unsafe and too risky. 

And there were a couple of his junior lawyers at the same conference, sort of nearby, and they were smiling. And I said, "What do you guys think? Do you agree?" And they're like, "We think we should probably actually figure out what tools might be good for the practice." And so then we just started asking about what they're doing.

So the secret cyborgs are the people in your organization who are using it—whether you have told them to or not (usually not, because the secret cyborg is secret). You might think that your organization is generative-AI-free, but we're here to tell you it is not. Maybe they're not using it on their work computer. Maybe they're using it on their personal devices. And maybe they're using it on their personal devices during work hours or after work hours. But they're using it without any guidance or instruction from you about how to use it in a way that complies with the other policies that are important to your business and your practice. And that carries much greater risk than having your team use it safely, with guidance and teaching and training. Have you encountered interesting secret cyborg issues in legal that are worth flagging?

Jen Leonard: I haven't had any secret cyborgs unveil themselves to me as a secret cyborg, but I can imagine—having been a junior attorney under a lot of pressure and stress and not really knowing what I'm doing (and I'm thinking of juniors here; I think this is across generations)—I can certainly understand the temptation when you have a machine as powerful as a GPT model to ask it to help you draft something. And I think that's something that organizations need to be aware of. I say this frequently, but I always imagine us 10 or 20 years from now looking back on the conversations that we had at the dawn of this and the rush to ban GPT technology everywhere. And I think we will very soon enter a world where malpractice insurers or risk managers are requiring or strongly encouraging organizations to create internal proprietary GPT models that people can use to avoid this.

I did hear a funny anecdote somewhere along the way, where a law firm partner said that he was relieved to find a typo in a brief—because he knew a human had produced it, since GPT doesn't make typos. So, you know, AI detection technology is notoriously fallible and there's no real way for you to know whether a secret cyborg created something or an associate within your organization did. So I think it's an important thing for firms to be thinking about, especially, like you said, the ones that insist that they are Gen-AI-free.

You know, that leads really nicely into our next segment. Those are a few topics or concepts that we think are helpful, and we'll try to add more as we go: the jagged frontier of capabilities, the law of uneven AI distribution, and this idea of the secret cyborg.

Main Topic: Q&A from Law Firms

Jen Leonard: But we also thought it would be helpful—because we get to present to smaller groups in smaller venues—to share some of the questions that we're hearing from those audiences and offer our best assessment for how to respond. And I've had the chance in recent weeks to spend more time than I've had to date with associates in firms and junior attorneys. What I'm hearing from them (to your point about partners who are close to retirement and grateful they don't have to think about these things) is that associates are really eager to know what their organizations are thinking about generative AI. And I think the red blinking light for a lot of firms in particular has been the business model and the client engagement piece. The associates don't really know how they should be involved, even though a lot of the technology will impact their work, maybe even more than senior attorneys' work.

One question I've gotten recently is: How should I try to get involved if I'm an associate, and does it matter which level of associate I am when I try to get involved? I wonder, Bridget, if you'd be comfortable offering your thoughts on that first, and then I'm happy to build on yours.

Bridget McCormack: Absolutely. I mean, I am not an associate and I don't work in a law firm. But with those disclaimers, I have thoughts about how exciting a time it is if you're entering law practice right now. I get why it might also be sort of terrifying, given the change that feels imminent. But I think the technology is going to allow so many new things that lawyers can do, that you want to get fully involved. 

So I think the first thing associates should do is raise their hand in whatever forums their law firms have for learning about the technology, exploring use cases, putting in place frameworks and practices, and maybe—you know—innovation teams for thinking about the ways the technology might impact their own business and practice in both positive and negative ways. They can obviously explore all of the above. And my sense is people are very eager to have helpers and people who want to sign up to get involved in this work. And because there isn't any expert in it—this isn't a case where your law firm has the world's most renowned insurance defense practice group already, where all you can do is whatever menial tasks they give you—this is an opportunity for you and pretty senior people in your firm to work together and to get to know each other in a way that probably would be hard in other parts of what firms do. So my main advice is: speak up, raise your hand, ask Who’s doing this and how can I get involved? I'm willing to do whatever is needed.

What do you think? Do you have other thoughts or advice for folks?

Jen Leonard: I think that's right. And I think maybe the partners or the senior lawyers are not really thinking of your involvement until you raise your hand. Many organizations and firms have associates’ committees. So I would advocate for raising this with your associates’ committee—usually they have a seat on other committees (or the leader does) and they can raise these concerns. I think it's something for leaders to think about, because I can also understand busy partners saying, “We don't have the time to integrate all of this information and make sure that everybody's voice is heard.” But because this is such a weird technology, it also requires bottom-up learning. 

The use cases are being developed at an individual level. And because of the nature of the work that junior lawyers do, they are more likely to identify use cases that an experienced securities lawyer may not be flagging—use cases that could create efficiencies or even open up new revenue streams for the firm that it hadn't thought of before. So I think, from a strategy standpoint, really accelerating your ability as an organization to move into the future requires that you engage with the associates.

I also think, frankly, right now there is a competition for talent among firms. I have worked with law students for a very long time—they're always trying to differentiate firms from one another when they all seem the same. To be perceived (or to actually be) the firm that is incorporating the voices of new talent into its future strategy, I think, is a smart recruitment tool as well. I would want to go, right now, to a firm that doesn't have a senior leader saying, "I can't wait to retire," but has leaders that are saying, "Help us build the firm of the future."

Bridget McCormack: Yeah, that all sounds right. I don’t know… this feels like a topic we might come back to as we have other ideas or thoughts or experiences in our work together. Another topic that I think is worth talking about here today, that we hear a lot, is a question about how leaders can create a strategy to keep absorbing all of the fast-paced new information coming out about this technology and its capabilities while still maintaining a clear direction and keeping the trains running. Usually, in most businesses (and law firms are businesses), it's not like we have a bunch of extra time where we're like, "We don't do anything on Mondays—let's have Mondays be our tech learning day." We're busy; we're already doing more than we have minutes for. So how do you think about this when you talk to law firm leaders or leaders of other legal organizations about creating a strategy for this moment of taking in the new but continuing with a clear path?

Jen Leonard: Yeah, keeping up to speed on the developments in this area is virtually impossible. And I know you and I both spend a significant amount of time following a lot of news and thought leaders. So I think it's a fool's errand to try to keep pace with the day-to-day, minute-to-minute developments. But I think a smart thing to do as an organization is to work together to create a framework that represents a strategy aligned with your organization's goals and mission. Then think about how that framework can absorb these changes and help you make decisions more quickly, because you don't have every Monday to take off and examine what happened over the last week and how it fits in. But when there are major developments, look back at that framework and consider how it applies and what decision would result—or adapt the framework as you go if things are changing so significantly.

And I know, Bridget, we're going to talk in an upcoming episode about the transformative work that you and your team have done at the AAA. I also know you've developed some questions that might be helpful for leaders to start thinking about and folding into their own frameworks. Would you be open to sharing those questions?

Bridget McCormack: Yeah, absolutely. And I should say I'm excited to talk about how we're applying a lot of these things you and I are talking about—but that’s going to come in a future episode. In the meantime, the framework we've developed has been really useful for us. (Again, I think Ethan Mollick is actually how I first started categorizing these—just footnote Ethan Mollick for everything I say. Could we just do that? Blanket Mollick footnote, okay?) But I do think these are great questions for any leader of any organization, and especially any legacy legal organization (which is almost every legal organization, because we've all been able to withstand all disruption for a really long time).

One question is: What do we do now that is no longer valuable, that people won't need because they can do it themselves or there are just going to be better, faster, cheaper ways to do it? Let's always be thinking about that and always be categorizing some of our ideas into that bucket.

Another question: What can I not do today that I'm likely to be able to do tomorrow? Where is this going, so that something I can't offer people today is something I'm very likely going to be able to offer soon? You have to keep a list of those things and figure out what it's going to take to get there—and, obviously, what it's going to cost in terms of time and people and all of the above.

A third question: How do I democratize work and open new markets or grow current markets to a scale that wouldn't be possible today? What are the ways in which we can take things we do now and just scale them?

And finally: How do I move work upmarket so that I am competing in new ways? What are the new ways I can be competing, given that some of the things we spend a lot of time on right now can be moved upstream?

We can talk in a lot of specific ways about what that’s actually looked like in the day-to-day at the AAA. I'm lucky that we already had a sophisticated, robust, well-resourced innovation program to be able to map all this onto. But that's been a helpful way to categorize what can otherwise feel like just a swirling set of questions, given all the change that feels like it's around us. What do you think about all that?

Jen Leonard: I love those four questions. I think it's like a North Star framework for thinking about the future. I think the second one—the “What can I not do today that I could do tomorrow?”—is one that really challenges a lot of lawyers, because we haven't been forced to reimagine the way that we work. So having some priming activities to even loosen up those creative muscles and get us thinking about what we could possibly do... 

We've been so tethered to, at least in the private sector, the billable hour model for so long, and it's been a really successful, lucrative model for many firms, but it's also a huge limitation on what we're able to achieve. We each only have 24 hours in the day, and we have to sleep during some of that time and live during other times. So I think the two last questions—How do I democratize work and open up new markets? and How do I bring work upmarket?—are important for different parts of the industry.

On the democratizing piece: really, really big global firms that are highly leveraged and rely on generating enormous amounts of billable hours, at the junior levels in particular, will be impacted by this technology in ways that challenge their existing business model. But if you have an abundance mindset and you work with your organization around mitigating loss aversion—the instinct to cling to a world that is rapidly changing—you could find so many different ways to serve new markets. You've been an advocate for civil justice reform, and something like 90% of American civil legal needs go unmet. That is an enormous untapped opportunity to serve small businesses and individuals, right?

And then the fourth question, “How do I bring work upmarket and compete in new ways?” I think if you're a small or solo practitioner, if you're a plaintiff-side lawyer, if you're a boutique firm, if you're a public law department or a really resource-deprived legal services organization, this is such a game changer for you. You can start to level the playing field. You can start to do things you weren't capable of doing, and really compete and transform the way that law is practiced. And I think that's incredibly exciting for both groups, really.

Bridget McCormack: Yeah, I completely agree. I feel like question three and question four cover both the excitement on the supply side and the demand side, which I think is really interesting. Again, in a future episode, we're going to talk about opportunities for courts and judges—and frankly, ADR providers are like courts and judges; they're the operating system for the supply and the demand. And I think the upside for the operating system is equally exciting. (I'm going to have to footnote Jordan Furlong there for giving me that framework.) Some of the other people that we listen to and read a lot have helped me think about that, too. I’m excited to talk about that as well.

Jen Leonard: And that would be another great episode, to channel Jordan's great work on the operating system. And Cat Moon has also talked about the operating system of law, so we could cover that in the future. But your last point—and all of the, you know, "Avengers" of thinking about the future of law that we've referenced—one of the things I think is so cool about this moment, personally (and maybe it's just because of the way I like to learn), is that you're really foraging for information every day. And I'm using that term because, as anybody who's had to be around me the last few months knows, I just finished Tomorrowmind by Marty Seligman and Gabriella Rosen Kellerman. And thinking about how we're naturally wired to forage in our environment and meet the landscape as it is, versus trying to plan or look back. I've been thinking about that from a knowledge standpoint, and nobody has a roadmap for how to learn about this stuff. There's nobody who's been doing this for 30 years and is the go-to expert. But you and I have developed—and continue to aggregate—a list of go-to resources that grows by the day. And we thought it would be helpful for people to hear some of the things that we learn from and the people that we learn from. So I was wondering if you would kick us off, Bridget, and share some of your favorite resources, and then I'll add a few at the end. (We'll do this in every episode as we add more names to the list.)

Resources

Bridget McCormack: Yeah, absolutely. I think this is great just for folks to understand how we're getting our information. I don't have any degree in this—I haven't gone back to school for it, and I don't think anybody really can (yet). I know I see a lot of Coursera offerings for this, that, and the other thing, but I'm listening to a lot of podcasts and reading (or listening to) a lot of books. I will confess that I listen to books more than I read them, because I can walk—or really slowly jog—while taking in the information and try to get two things done at once.

We've already talked a lot about Ethan Mollick. Ethan Mollick is a professor at Wharton, and I think he's probably the leading scholar or thought leader on these technologies who is translating the work of other researchers and scholars for the real world—for those of us who don't want to read the academic papers. I mean, Ethan's work is very accessible. So subscribing to his Substack is one thing I recommend everybody do. It will not overwhelm your inbox, and it's a great way to keep up with what he's working on and sort of aggregating.

And also just follow him on social media. He posts really interesting things, and you get to keep up with what's happening. He recently put out a book on this topic called Co-Intelligence, and I highly recommend it. In fact, maybe start there—listen to that book, and you'll be ready to figure out where you want to go next. It's a great place to get caught up to today.

I already mentioned Jordan Furlong. Jordan is a really thoughtful writer, scholar, teacher, trainer who's out of Ottawa but thinks about legal systems all around the world. And Jordan's Substack, too, is focused on legal and often on this technology—not only on this technology; Jordan writes about other things in the future of the profession—but I highly recommend Jordan's Substack. It’s another really easy way to kind of keep up with what someone who's really thoughtful and smart is thinking about what all this means.

And now I'm throwing one podcast in here that is not legal at all. (Ethan Mollick's content isn't explicitly legal either, but...) I've really enjoyed it for understanding how the technology is going to impact so many other parts of your life and your world—other industries—because that then helps me think about the corollaries in law. And that's Reid Hoffman's podcast called Possible, which he does with Aria Finger. I wish I knew who Aria Finger was; I like her, I like listening to her, but I don't know her. But they often have other guests and they will talk with their guests about what this technology is going to mean for entertainment, for energy, for... you name it. And it's pretty short segments, but really super interesting and, in a way, I think it's a good way to learn generally what's possible. (That's why it's called that.)

What about you? What do you have on your list?

Jen Leonard: Well, I'm excited to learn about Possible, because I didn't know about that one before. So I'm going to add that to my list. One that you and I have both really followed closely and used with our class when we taught generative technologies is The Artificial Intelligence Show. (It was formerly The Marketing AI Show, because they have a marketing background.) It's Paul Roetzer and Mike Kaput—they have an institute called the Marketing AI Institute—and they've sort of found themselves, I think, in a place where they are educating all of us now. They've been doing this for a really long time, and their podcast comes out weekly on Tuesdays, I believe, and it's my number one go-to. They give a really great survey of developments in the broader tech landscape and then talk about how organizations should be preparing and actively testing and integrating technology. And I find it to be really accessible and approachable. So I would really recommend The Artificial Intelligence Show.

Hard Fork is the New York Times tech podcast, which we both really enjoy. The hosts are hilarious. They also do a broad survey of technology developments and frequently a deep dive into generative AI.

And then my last recommendation for today is Allie Miller. I don't know Allie Miller personally; I just started following her recently. But she has an MBA from Wharton, and she was formerly the lead product manager at IBM Watson and then the global head of machine learning startups and venture capital for AWS. She's now on her own and doing amazing work advising companies. I follow her on social media, and I'm not a huge social media person, but I actually find social media to be a really nice medium for learning about these things because it changes so quickly. So those are some of my current go-tos, and we'll continue adding to this list in future episodes.

And so that brings us to the end of our conversation for today. We've covered a lot, though. We'll continue sharing our magical Gen AI moments, some concepts unique to the technology itself, questions that we're hearing across the profession and our best attempts to answer them, and the resources that we're finding most helpful in staying educated.