Summary
In Episode 3, co-hosts Jen Leonard and Bridget McCormack focus on how the American Arbitration Association (AAA) is becoming a model for AI-integrated legal service innovation. McCormack, AAA’s President and CEO, shares how her team has embraced generative AI to modernize dispute resolution processes, support internal operations, and improve access to justice. The hosts emphasize that AI adoption is happening whether leaders plan for it or not, and strategic guidance is critical to realizing its benefits while managing risks.
Key Takeaways
- AI Adoption Is Inevitable—Strategy Is Optional, but Risky: Most employees are already using AI tools informally. Without a clear strategy, organizations risk ungoverned, unsafe use.
- The AAA’s Innovation Playbook Is a Case Study in Change Management: With structured processes, cross-team involvement, and culture-building, the AAA demonstrates how legacy legal institutions can embrace modern, generative technology—fast.
- New Tools Like Clause Builder AI Are Already Live: AAA’s Clause Builder AI, trained on high-quality arbitration clauses, lets users generate custom contract language via natural language prompts—improving usability and access.
- Leadership Requires Rethinking Work—Not Just Tools: The AAA empowers employees to explore how their tasks may evolve with AI, giving credit for experimentation and adjusting performance metrics to reward innovation.
- Mission-Driven Expansion Can Support Access to Justice: AAA’s acquisition of ODR.com positions it to create faster, lower-cost dispute resolution tools for underserved individuals and small businesses—a major step toward scaling civil justice.
Transcript
Jen Leonard: Welcome back everybody to our regular podcast series, 2030 Vision: AI and the Future of Law. I'm your co-host, Jen Leonard. I am thrilled, as always, to be joined by Bridget McCormack, President and CEO of the American Arbitration Association. Today we're going to learn a lot about the American Arbitration Association and its efforts to integrate AI into the future of alternative dispute resolution.
We decided to start this podcast series after traveling together to present to various legal audiences about our excitement for generative AI and what it means for the future of law. We frequently sensed a great deal of interest and excitement, but also uncertainty about where to go for more information and how to get started with understanding generative AI. So we wanted to share what we've learned along the way—resources, our experiences working with generative AI, and an optimistic perspective about how this technology can transform legal services, legal education, and the practice of law for the better.
At the same time, we recognize that many thoughtful people are considering the risks and challenges inherent in moving forward in a generative AI era. So welcome to our learning journey!
To kick it off, each episode we like to share our gen AI moments since our last conversation—those day-to-day trials and magical surprises we encounter with large language models. Bridget, I'm going to turn it over to you to share your gen AI moment from the last couple of weeks.
Gen AI Moments
Bridget McCormack: It's great to see you, Jen. I'm excited for today's conversation. My Gen AI moment this week is actually from one of my team members, Jason Cabrera, who works in our marketing department as an assistant marketing manager. Jason has really embraced every generative AI tool he can to figure out how it can make his work more efficient and enjoyable, and to see what each tool can do well with his guidance.
He recently took a course offered by the Marketing AI Institute—it was an online certification program taught by Paul Roetzer and Mike Kaput. Jason earned his certification on the very first day and was super proud of it (he even posted it on LinkedIn, and Paul Roetzer commented on his post, which was very cool). But what's even cooler is what Jason did next: the course was about scaling AI within your organization, and he decided to put that into practice.
He realized that the future of knowledge work will be heavily impacted by AI, so it makes sense to start examining the tasks our knowledge workers do. Across our organization we've been learning and cataloging our tasks together. Jason prompted a few different large language models with a list of all the tasks from his job (which he pulled from our HR website) and asked each LLM to evaluate which tasks it could handle 50% or more of with no additional tools.
Then he asked a second set of questions, this time giving the models various additional tools or context, and had each model create a matrix for him. The matrix showed which of his job tasks the LLMs were not going to help with at all, which tasks they could help with considerably (with no extra effort or tools), and which tasks fell somewhere in between. He ended up with this unbelievable matrix and sent it to me and the rest of my leadership team. Now he's proposing we set up an internal academy so all of our staff can start thinking now about how their jobs might change and how they can take advantage of those changes for the better.
I just thought that was a really exciting use case—Jason took the initiative and wants to help us lead in this new way at the AAA. How about you, Jen? Do you have a gen AI moment from the last two weeks?
Jen Leonard: I do! First, congratulations to Jason on that certification—that’s really cool. It also reminds me of something I’ve run into in my own trial-and-error with LLMs: sometimes I think, Can it do this? Can it do that? What would it be good for?—and it doesn't even occur to me to actually ask the LLM itself. So I love that Jason did exactly that. Very cool.
My experience isn't one specific moment this week, but more of a global experience. I've been working with the new Claude 3.5 (Sonnet) model that came out a couple weeks ago. It lets you upload files related to a project you're working on and gives you a split-screen view: on the left, you see your prompt and a very straightforward output, and on the right, you see a sort of chain-of-thought showing how the LLM is generating the output.
In my work with my partner, Mariel, we often start creative projects from scratch—legitimately creating workshops and educational experiences. We were facing a blank page for a workshop that we were designing to achieve a certain learning goal, and we were just racking our brains about how to structure it. So we opened Claude on a shared screen, gave it some context and our learning goals, and then asked something like, “What are some activities we could design for lawyers to learn about this?” Mariel, who doesn't spend as much time looking at these tools, was amazed. I was amazed too, especially seeing its suggestions side-by-side with our prompt. It's just so good at creative and strategic problem-solving—it really felt like having a thought partner.
Again, we'll talk more about this with your experience, but I think these tools work best when you come in with creative ideas and also enough expertise to know which suggestions are going to work and which aren't. There were a few suggestions that made us laugh—things where lawyers would roll their eyes—but there were a couple of ideas we never would have come up with on our own.
We found ways to adapt those ideas for a legal audience, and it saved us a ton of time while really benefiting our clients. So that was my gen AI moment for the week.
Bridget McCormack: Great! I'm glad to hear about that. I haven't tried that particular use of Claude yet, although I've been really impressed with this new version of Claude for the tasks I do every day. I'm excited to try using it the way you did—that's a daily use case for a lot of people. I'm glad to hear how well it worked.
Jen Leonard: I'm also reminded, as you're describing this, that I use these tools a lot for business strategy planning. For example, I once asked an LLM to identify different revenue streams for our business and to give me advice on developing a more strategic approach to our service offerings. It was really interesting: the areas where we have the greatest demand (and revenue) also tend to be more discrete, one-and-done projects. The LLM actually suggested prioritizing more sustainable projects—even if they're lower in dollar value—when it came to long-term strategy. And it's logical: if I sat and thought about it, I might have come to that conclusion, but it's not generally how I approach business development.
Bridget McCormack: Yeah, that's really interesting. It's another example of how these tools can be strategic in ways our human brains might take longer to reach. I agree you would have gotten there eventually, but the AI does it almost automatically, right? It just does it immediately. That's a real benefit of using it as a thought partner.
Definitions: Probabilistic vs Deterministic & Hallucinations
Jen Leonard: One of the other things we wanted to do on this podcast was provide a couple of concepts or terminology each episode that might be alien to lawyers. And one I want to toss to you, Bridget—because it ties nicely into the work you're doing at the AAA—is the idea of artificial intelligence being probabilistic versus deterministic. What do those terms mean?
Bridget McCormack: A probabilistic AI model generates outputs based on probability distributions. So the same prompt or the same question can produce different results each time you ask. There's some randomness and uncertainty involved. You could view it as flexible and creative, but also less predictable, which, as you might imagine, is quite disconcerting for lawyers. I'm sure we'll talk a lot about that.
Most large language models like GPT and Claude are probabilistic models.
Deterministic means the outputs are fixed for a given input. If you ask the same question ten times, you're going to get the same answer each time. The outputs, like I said, are fixed. There isn't, therefore, any randomness involved in working with a deterministic AI. It's more consistent. It's reproducible. It's also less creative, and can be a little less fun, but for some rule-based systems and some AI applications, that is the best model. You can imagine some contexts where that's the only model somebody would want to use.
Jen Leonard: And I think most lawyers are used to artificial intelligence that is deterministic. They can look under the hood at an algorithm where, if you give it certain inputs, it gives you a certain output. And I think that's part of the angst lawyers are feeling now: this is a different kind of technology. It makes many of them, I think, very uncomfortable that you don't have that reliability of deterministic outputs.
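To make the distinction concrete, here is a minimal sketch in Python of how decoding temperature separates deterministic from probabilistic behavior. The toy vocabulary and probabilities are invented purely for illustration; real language models work over vastly larger vocabularies, but the same idea applies.

```python
import math
import random

# Toy next-token distribution for the prompt "The arbitration clause shall ..."
# (vocabulary and probabilities invented purely for illustration)
next_token_probs = {
    "be": 0.45,
    "apply": 0.25,
    "govern": 0.20,
    "survive": 0.10,
}

def pick_next_token(probs, temperature):
    """Deterministic when temperature == 0, probabilistic otherwise."""
    if temperature == 0:
        # Greedy decoding: always return the single most likely token.
        return max(probs, key=probs.get)
    # Re-weight the distribution by temperature, then sample from it.
    weights = {tok: math.exp(math.log(p) / temperature) for tok, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

print([pick_next_token(next_token_probs, temperature=0) for _ in range(3)])
# -> ['be', 'be', 'be']  (the same answer every time)
print([pick_next_token(next_token_probs, temperature=1.0) for _ in range(3)])
# -> e.g. ['apply', 'be', 'govern']  (can differ from run to run)
```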
Bridget McCormack: Yeah. And we’ll have this conversation over and over again, but it is interesting because obviously, you know, humans are more probabilistic than deterministic, right? I know you are a very smart lawyer. I think I am a fairly smart lawyer. You could ask us the same legal question and, like many legal questions, if there’s nuance, we might give slightly different answers.
We would certainly put them differently, but they might even be slightly different answers. And that’s—in some ways—what lawyers do. We try and figure out the best answer to the question before them. But I do understand the discomfort in working with a technology that might not give you the same answer to the same question 30 seconds apart—or one second apart. So, it’s interesting.
How about the term hallucinations? Everybody talks about hallucinations, but what exactly does that term mean in the context of generative AI?
Jen Leonard: Hallucinations have gained a lot of attention in the legal space because of this probabilistic model. These models are essentially trained as predictors of the next most likely word, or token, in a sequence, based on the language they were trained on.
So, there’s not actual thought happening behind these predictive outputs. And sometimes what happens is the machine generates something that, in its training, is what’s likely to come next in that sequence of words. But in the context of the question we're asking, it doesn’t make sense—and appears to be completely made up.
And of course, the most famous example that set us all back months in the conversation is the ChatGPT lawyer from last year, where ChatGPT hallucinated a case name that didn’t exist, and the lawyer filed a brief containing that case name with a federal court.
But that’s my understanding of hallucinations: a probabilistic output that sounds plausible but doesn’t correspond to anything that actually exists.
Bridget McCormack: Yeah, and it’s definitely been a significant stumbling block for a lot of uptake in legal—and understandably so.
I did a presentation last week to the appellate judges in the state of Missouri, and I sort of did what you and I often do together—a “state of the technology” and then, you know, what are some of the immediate pros, and some of the things you have to be looking out for. And hallucinations—and that one lawyer in New York who set us all back about five years—was mostly what they knew about generative AI.
Jen Leonard: I feel like maybe one of the reasons it’s so troubling to lawyers is because, like you said, we are actually very probabilistic in the way that we write as humans.
And it’s interesting, as you were saying that, I was thinking back to very early days when lawyers would say that the value we add is our own individual style and tone and bite to our writing. But it also seems to be the thing that makes lawyers uncomfortable. Because we don’t typically hallucinate.
You might have a junior lawyer who doesn’t understand the holding of a case or misses a nuance, but they’re generally not submitting work product to a judge or a partner that has completely made-up case names.
Bridget McCormack: I think that’s absolutely right. You don’t see that. You do, however—I think sometimes with junior lawyers, or at least with brand-new clerks—see citations that kind of miss the mark. Because a brand-new lawyer who might not understand the larger context in which the legal question arises just doesn’t have the experience yet to figure out why the answer might not be the best fit.
Jen Leonard: So, those are a couple of concepts that we thought might be helpful for people to understand. And I think both concepts—especially this probabilistic concept—are good things to keep in mind as we move into the next portion of our conversation, which is really learning from you, Bridget, about your efforts to lead new initiatives and to think differently about how AI will impact alternative dispute resolution.
Main Topic: The Importance of Innovation and Collaboration in AI Adoption at the AAA
Jen Leonard: So for the next portion of the podcast, this will really be an interview by me of you, so that the rest of our audience can learn what it actually looks like for an organization to become AI-emergent—again, to use Paul Roetzer’s language. So if you’re okay with it, I’ll lead us through a series of questions to get a sense of what you and your team have been up to—which is a whole lot.
Bridget McCormack: Yeah, let’s do it. I think it’ll be fun.
Jen Leonard: OK, so just to level set: you are, as we mentioned at the beginning, the CEO and President of the American Arbitration Association. And many, many lawyers are familiar with the AAA—may have engaged with the AAA, particularly litigators. But just so everybody knows, can you explain in simple terms what the American Arbitration Association does and what its mission is?
Bridget McCormack: Yeah, the AAA is the largest provider of ADR—alternative dispute resolution—services in the world. We provide arbitration and mediation and other dispute resolution services to parties who want to resolve disputes outside of courthouses. And that’s a lot of parties.
We do this across the United States but also across the globe. As is probably not surprising to anybody listening, disputes about cross-border transactions are very often decided in arbitration. Because of course, it would be complicated for any company to cede their dispute to another country’s court system. So arbitration is a common way for cross-border transactions to resolve disputes.
But domestically, it’s also a choice that lots of businesses make. A lot of B2B contracts have ADR clauses in them that call for arbitration as the method of resolving disputes, should they arise. There are also arbitration clauses in lots of common consumer contracts. And the AAA has been a national leader in requiring due process protocols in contracts between businesses and consumers—because obviously, if we’re going to be providing dispute resolution services that courts enforce, they have to be fair and trustworthy.
So that’s what the AAA has been doing since 1926—almost 100 years. Providing ADR services. And we provide a lot of them. Last year, we administered over half a million cases, just to give you a sense of the volume.
We have 28 offices across the United States, one in Singapore, and we have talented people working on all of those cases all the time. It’s a wonderful organization with really mission-driven people. One of them described it as “making the world a better place by giving people more and better tools to resolve disputes and get back to their lives, get back to business, and often preserve relationships.” So that’s what the AAA is and has been doing since 1926.
Jen Leonard: So—half a million cases a year. I know your previous life was as Chief Justice of the Michigan Supreme Court. You have a lot of experience with state court administration and adjudication. How does that compare with the number of cases that are brought in state courts every year? Do you know?
Bridget McCormack: I do. I can at least give you the Michigan numbers. Michigan is a large state, though not the largest in terms of court dockets. The Michigan courts administered between 3 and 4 million cases in a year. Of course, the criminal dockets are heavier than the civil dockets. At the AAA, we don’t do criminal cases, obviously—these are all contract disputes.
So, in terms of the civil dockets in state courts, I’m sure the AAA’s docket is larger than some states’ and smaller than others, but it’s not that dissimilar, to be honest.
Jen Leonard: And we know that that volume in the civil courts—in state courts—creates all sorts of challenges to administering justice. And we could talk about that in a few minutes as we discuss your vision for the AAA.
But just to stay at the intro for a minute: so two parties, generally companies, opt into private arbitration. So the data is not public, correct? You have 100 years of data around private dispute resolution.
Bridget McCormack: That’s right. It’s not public.
Jen Leonard: And how many people work at the AAA globally? And it’s also the International Centre for Dispute Resolution as well?
Bridget McCormack: Correct, yes. The International Centre for Dispute Resolution—or ICDR—is our global arm. It’s a few decades old, and we have a talented team working for the ICDR. There are about 720 people who work for the AAA–ICDR right now.
Jen Leonard: Now—big team, lots of volume, lots of data—and you stepped into the role as the leader of the organization about a year and a half ago. And as you did, it was about two months after ChatGPT emerged, and a couple of months before GPT-4 came out.
What did you start to see as you learned more about the organization and about emerging technologies that played into your strategy for the future of the AAA?
Bridget McCormack: I accepted the job in September of 2022. That was two months before ChatGPT was released into the wild. I didn’t start the job until February 2023. But it was pretty clear to me—between November and February—that there was significant change coming to the business of law, the practice of law, and therefore to those of us who serve lawyers and their clients. And probably coming fast.
So I made it a point with my leadership team to say, we’re going to dive in with all of our energy. We had a series of legal futurists come and educate us quickly about what was happening and where things seemed to be headed. And then we stood up lots of teams to start learning in different categories of the organization, because it was very clear that we had to figure it out—and figure it out quickly.
Jen Leonard: I know—I sort of had the opportunity to sit on the sidelines with some popcorn and watch you as you were moving in this direction. And I remember asking you at a certain point, “Where did you find all of the colleagues who could help stand up all of these projects?” And it sounds like you were really blessed and fortunate to come into an organization that was already doing a lot of innovation work.
Bridget McCormack: Yeah, I should have said that. I feel incredibly lucky that the AAA had a robust, structured innovation program in place when I showed up.
There was a vice president of innovation who had a team that worked for her and reported to her. And then, within the first two months of my being there, every single person across the organization did three hours of innovation training. Every single person.
We have a wonderful platform called BrightIdea that connects everybody with ideas they can post, comment on, and vote on. We move those ideas to Go Teams for design sprints, gather user data and input—customer discovery—and then move things quickly through an innovation pipeline into practice.
Some of those ideas result in small changes that help our internal operations run more smoothly. Others are customer-facing and add value to what customers get from AAA already. But the structured, robust innovation practice was already there—I didn’t have to build it.
I think if we’d had to build that, that would’ve been step one. But we got to skip that step and use the platform as a springboard into this new AI discovery phase.
Jen Leonard: That’s really interesting. And it wasn’t meant to be a course correction to anything you said—it just made me think. I remember you had mentioned the three hours of innovation training, and I thought, Where did that come from? And it came internally.
I know Bob Ambrogi, who’s someone else we both follow, wrote an article recently about the “haves and have-nots” in legal spaces. The ones with really big, robust teams right now have an upper hand in thinking about these things.
And yet, I mean—it still requires enormous leadership and enormous change management to shift the goalposts. And it’s funny, because I was preparing for a Gen AI presentation the other day for lawyers and judges—a pretty well-educated, sophisticated group—and someone said, “We need to slow down because this is going to scare a lot of people.”
My response was kind of: I think these people can handle this. Their training is to handle people’s biggest challenges. Everyone’s a grown-up. But how did you have conversations to clarify a vision that was new to everybody, and bring people along? And what advice would you give to other organizations that might be afraid of freaking people out?
Bridget McCormack: Yeah, I do think that “not freaking people out” is actually an ongoing challenge. Even among the talented senior team, there were some skeptics at the start. I remember one of my team members—who is unbelievably valuable—saying something like, “Yeah, and I remember when the blockchain was going to change everything… and that never happened.”
There are skeptics at every level. But I think the fear is: If this new technology really is going to disrupt knowledge work... is it going to disrupt my job? That’s an ongoing issue. I think it’s really important to be communicating regularly with all your teams about it.
In terms of our overall strategy, in a way, we were lucky. We had a new CEO. The last CEO had been in the seat for 10 years. The last strategic planning process was maybe six or seven years ago—interrupted, of course, by a pandemic that changed everyone’s strategic plans.
So it was actually probably lucky that I showed up at an inflection point anyway. It made sense for all of us to roll up our sleeves and figure out: What does this look like? What does it mean for what we’ve traditionally done, and what might we do next? Let’s think expansively and creatively, because in our view, it gave us an opportunity.
It was also important to involve everybody. So one of the things we did was set up user groups for anyone who wanted to raise their hand. We got them the $20/month GPT licenses, and eventually Claude licenses, as the tools became available. We had some folks from legal, but also case managers and even innovation team members, using the tools regularly and uploading the use cases they were discovering. They had regular meetings, and some of their ideas got moved into the innovation pipeline for formal processing.
We did the same thing with a group of our panelists. The AAA has over 5,000 panelists—independent contractors, not employees, but important members of our community. It’s important to us that they are well supported and well trained. We asked for a group of them to raise their hands, too. We got them licenses for legal-specific LLM tools—CoCounsel, and eventually vLex and Clearbrief—so they could do the same: experiment, meet, upload use cases to a shared site. They’ve now written a paper and done presentations about what they’ve learned and the ways they’ve found this can positively impact their practices.
And then when GPT-4 started coding last April, my very talented IS and innovation leader, Diana Didier—who you’ve met—figured, “Okay, well, I guess now our engineers are going to have to start figuring out what this means.” Because if it gives them significantly more capacity, and it’s also going to become important for what we build in the future, we want to make sure our own engineers are the ones who learn how to build with it.
And like every other thing about this technology, it didn’t come with a manual. There was no, “Here’s how you learn to code with these tools.” But our engineers did the same thing—we brought in some help, but we made sure everybody had the information they needed to start experimenting.
Jen Leonard: Gosh, I have so many questions about what you just said, and I’m not sure exactly which direction to go. So I’ll start with the last one I wrote down.
I get the chance to visit with a lot of different organizations thinking about innovation broadly and Gen AI specifically. Many of those organizations are law firms, which I think are somewhat uniquely situated. If they take their foot off the billable hour gas, they’re letting go of revenue in the short term to try to figure this out.
So that might be slightly different from the AAA. But I imagine in any organization, you’re asking people to carve out time—either in addition to, or replacing, things they’re used to doing or feel obligated to do. Was that a challenge as you led the organization? Helping people feel supported to take that time?
Bridget McCormack: Yeah, it’s a great question, and really important to call attention to, because everybody feels overworked all the time—and they are. We have a really committed staff, and they work incredibly hard.
Whenever I ask folks to come to an all-team webinar, I always apologize. I say, “I understand you’re giving me this hour, and that means you’re going to do an hour’s worth of work some other time.” I know that’s important.
They have innovation goals, and their KPIs can be satisfied by some of this participation in our generative AI discovery. We also allow them to get training hours based on listening to this podcast, for example. So they can do generative AI learning to satisfy some of their work goals. That feels important—to make it as easy and attractive as possible for people to participate, not just feel burdened by it.
But it’s also something we’ve talked about at the senior level a lot. If we’re asking people to do this, we have to make sure they have the room to do it. You’ve probably heard stories about IBM giving everyone a day a week for innovation—time to come up with new projects. How you make space and time for people is pretty important if you want them to do something other than what they have to do every day—which is always a lot. So yeah, it’s a constant conversation. I’m not saying we’ve solved it, but we’re always talking about it.
Jen Leonard: I love that. And it makes me think of another sort of common challenge with organizations and innovation. Step one—especially in legal—is to get people to tap into their creativity and ideation. Feel comfortable and supported coming up with new ideas. But then sometimes, that can get out of control—and everybody has an idea about how we could do everything better.
It sounds like maybe you have some mechanisms and matrices you use to select pilot projects. I’m sure our listeners would be curious how you select, from all these great ideas, which ones to pursue.
Bridget McCormack: Yeah, we do. And again, we’re lucky to have a structured innovation program, because that’s given us the opportunity to build the muscle around how to get the information you need to decide whether to invest more in any given idea.
So, not just generative AI ideas—though we now have so many of those—but all ideas that come through our pipeline. We’re applying the same matrices and checklists to those, too. Like you said, you can get people so good at ideating that you end up with far more ideas than you have resources to pursue. So how do you choose between them? That’s a big part of our innovation pipeline. It’s why we have Go Teams and design sprints and ways to get quick, informal user input before we move forward with anything.
With the Gen AI ideas, we’re treating things a little differently right now. This is still an experimental phase, so we’re willing to do more exploration—build things that cost time from engineers and innovation leaders, but may not bring in revenue—because the learning and training we’re doing is helping prepare for bigger bets down the line.
Otherwise, I think the questions fit into the traditional buckets about how we decide which ideas to move forward with. We now have a separate Gen AI SteerCom—a steering committee that meets once a week to go through our project list and status. Sometimes there are trade-offs. We might have two projects at step three of a ten-step process, and we have to make a hard call about which one to pursue—or decide to slow down both to invest in a bigger bet. So there’s a lot of complexity there. I think having an excited and talented team is the whole ballgame, honestly. This isn’t something any one person can do alone. You need your whole leadership team involved in these conversations.
Jen Leonard: Well, that’s a perfect transition to the next question I have, which is—you said at the outset that you involve everybody.
I think some organizations have the misconception that the technology department alone can solve these problems. That they’ll figure out how to train people, roll out new tools, and be done. So I have sort of two questions: One—can you dispel that notion for us and describe why this technology just doesn’t work that way?
And two—I’ve seen, and I’m not saying this is the case in your organization, but I’ve seen a lot of burnout among CTOs. Because they’re learning this technology for the first time, too. It’s weird, it’s transformative, and the teams are often not sized appropriately to figure out transformation for the whole organization. So: why is it important to bring everyone along? And how do you support the technologists in your organization as they navigate this area?
Bridget McCormack: Yeah, both excellent questions.
I’ve said this many times—your IS department, or your IT department, is not your R&D lab with this technology. Your entire staff is your R&D lab. Jason Cabrera in marketing is doing incredible R&D work for us right now. He’s figuring out what are the ways this technology can really impact our jobs in a positive way—how we can move parts of the job that AI can do easily to the technology, and focus our human attention on the parts that make a difference for the users who depend on us.
And that’s true of our HR director. It’s true of every case manager who has an idea—like, “What if we gave the parties a tool built with this technology to help them search our panelists using natural language?” That customer service team—who really knows what people struggle with—they’re just as important to your R&D effort. There’s no way this can be a tech-only solution. Anyone can start using these tools right now—you don’t have to wait for IT to enable it. Everyone matters in this effort.
And as for your second question—how to support your IS team—it’s a really important thing to talk about early. Again, there’s no manual for this. No training program for engineers. They’re all learning together—on GitHub, from one another, from Reddit, wherever—about how to figure this out. So, what we’ve done is try to give them freedom to learn. Some time to play with the tools. Some outside support where we can offer it.
For example, we have consultants in Tel Aviv. Our lead Gen AI engineer, Yogesh, meets with their lead Gen AI engineer, Yoel, and they basically share discoveries—like, “Hey, this weird thing happened, what do you think?”
We’ve also traditionally had a larger offshore bench that supports our IS team when we have more to build than we can handle internally. That team has been crucial in giving us capacity and support as we expand.
But I think it’s smart for organizations to seriously consider resourcing their IS teams—and their data teams, if they have them. If you’re looking for places to invest for the future, that’s a good place to start. And we’ve been doing a lot of that.
Jen Leonard: OK, so I wanted to shift gears a little bit—still focused on all the efforts you have underway—but one of the leading pieces of literature on innovation that you and I both have been drawn to is Clayton Christensen’s Innovator’s Dilemma.
That idea that when you’re in a really highly structured, historical institution, it tends to be set up to serve a specific world that may no longer exist when new technology comes to the fore. And you recently decided, as an organization, to make an acquisition of ODR.com and Resourceful Internet Solutions. So I’m curious—why did you make that decision? And how does it play into broader principles around strong innovation?
Bridget McCormack: Yeah, this has been a really exciting time for us—acquiring the ODR team and the RIS team. I don’t even use the word “acquiring,” even though that’s technically what happened. We really feel like we’ve just found a new set of partners for scoping out and planning for the future of dispute resolution.
As I said earlier, we’ve been really busy at the AAA—even before we joined forces with the ODR and RIS teams—building our own generative AI tools. We have tools for our customers, we have tools for our internal teams, we’re putting them out to market. And we’re going full steam ahead on finding tools that make our operations and our services better, faster, stronger.
But at the same time, the ways in which we’ve served the market—businesses, organizations, individuals who want a different way to resolve disputes than traditional litigation—have been mostly in the same lanes. We do a lot of arbitration. We are called the American Arbitration Association, after all. And mediation, of course, has been around for a long time. But there’s going to be more and more demand for new ways to resolve disputes.
People have been resolving disputes on the Amazon platform and the eBay platform with basically automated decision makers and ODR processes for a couple of decades now—and it’s going just fine.
There are lots of disputes that might be better resolved with a lighter touch. A faster, cheaper process—even than arbitration or mediation, which are already significantly less expensive than litigation. But still, arbitration and mediation might not be the right fit for every dispute.
As Colin Rule, the CEO of ODR.com, says: “We want to fit the forum to the fuss.” And sometimes, an ODR product—an online dispute resolution product, maybe with a mediator and maybe without—could be a better forum for a particular fuss.
That could even be true for our legacy clients—big businesses that will continue to want human beings to look witnesses in the eye and hear testimony in high-stakes, “bet-the-company” disputes. But those same users might also have lower-stakes disputes where a totally different process—a fully online, fast, automated one—makes more sense. That’s why this particular acquisition made so much sense for us. We can continue—with this new team—to enhance our legacy services and products. I think we kind of fit the Clayton Christensen model of a successful legacy organization that’s now building a second track for future innovation.
So we can continue to do what we do and grow that core. But at the same time, we can build this new business—new ways of resolving disputes—that’s pretty exciting. Because there are lots of disputes out there that just didn’t have a place to go before. And we think, given this technology, we’re the perfect organization to provide those options.
Jen Leonard: I think that’s so cool. Both because I love to see the theories in that work come to life—having somebody actually do what Christensen prescribes in that book—but also because I think it’s a good lesson for this era. We’re so driven by loss aversion. Fear of losing what we have or losing our jobs. And what you’re doing feels like not only growing the work of your internal team, but also creating non-disruptive, creative ways to grow new opportunities—across the organization and for the broader world.
Bridget McCormack: Yeah, that’s exactly right. You want to make sure you’re continuously improving what you’re doing right now. That’s really important. But you can’t just do that. You have to also be launching new solutions for the customers who are using your current solutions—because they might want different options in the future. So it’s really fun to be able to do it all.
And honestly, the two teams really get to leverage one another. I’ve said: one plus one makes way more than two in this case. It’s been really fun to think about what we can build together.
Jen Leonard: It’s also—one of our mutual friends, I think—Ben Barton, has written with Judge Stephanos Bibas of the Third Circuit a book called Rebooting Justice. I don’t know how long ago that came out—maybe ten years?—about online dispute resolution, drawing inspiration from eBay. So I’m sure they are very excited to see this come to life. Was that part of your inspiration in thinking this through?
Bridget McCormack: Yeah! I mean, Colin Rule, the CEO at ODR.com, is basically the grandfather of online dispute resolution. And he and Ben Barton were roommates in college.
Colin is not a lawyer, but I don’t know anyone who cares more about alternative dispute resolution than Colin Rule. Ben Barton once wrote about him—something like, “He’s the most dangerous kind of entrepreneur.” He said Colin’s not a great technologist, he’s not a lawyer—so he’s not really an expert in how lawyers resolve disputes—but he’s the most dangerous type of entrepreneur: a true believer.
He really believes in the mission of giving people better tools to resolve disputes in ways that preserve as much of what they want to preserve as possible. So yeah, Colin Rule was an inspiration for Ben Barton. I had Ben speak to the Michigan judiciary at our All Judge Conference five or six years ago. So it’s kind of fun for me to now be working with Colin.
Jen Leonard: That is a very cool fun fact! I love Ben Barton. And mission-driven entrepreneurs and change makers are very hard to dissuade or deter. So that’s amazing. And you—I would put you firmly in that category of a mission-driven change maker. I know for a very long time you’ve been sounding the alarm about the civil justice crisis, and access to legal services—including dispute resolution—for the average person and for small businesses.
How does this acquisition—and your broader strategic vision for the AAA—align with your longstanding goals to really make some progress in that arena?
Bridget McCormack: I mean, I do think this Gen AI moment informs all of it—and makes it that much more exciting. I won’t bore you with numbers, but 92% of Americans can’t afford legal help with their civil justice problems. That includes not only small businesses but also most medium-sized businesses. They’re legally naked. They just can’t afford lawyers. So they do their best to muddle through what we know is a weird language, with weird rules that are hard to find—and sometimes hidden.
It’s kind of unfair. And therefore, for many civil disputes, people either try to navigate on their own or just give up. I think this new technology—and a mission-driven entrepreneur like Colin Rule, who genuinely wants to change the way humanity resolves disputes—gives us an opportunity to reach this big blue ocean that lawyers couldn’t fish in before.
I think there will now be tools that provide an operating system for people to resolve disputes—and for lawyers, frankly, to help them. Especially creative lawyers who want to help with new business models. That, to me, is really exciting. It feels like perhaps the breakthrough that those of us who care about access to justice have been thinking about—and talking about—for a long time.
So I’m excited that, with the team at ODR.com, we’re going to be able to stand up processes that are fast and fair and trustworthy—for disputes that typically, people gave up on or were dissatisfied with after trying to navigate alone.
Jen Leonard: And I’m going to sort of steal this line of reasoning to respond to future allegations that I’m making people feel too afraid—lawyers and judges who have serious obligations to society—and reframe it in this way. Also, I love all the emerging vocabulary—from “legally naked” to “fit the forum to the fuss.” I love that.
One example I want to focus on—because I’ve seen you share it on social media, and I’ve heard you talk about it—is the Clause Builder that the AAA developed using generative technology. Can you describe what the Clause Builder is and what it does?
Bridget McCormack: Yeah, absolutely. The Clause Builder AI tool—which went live a few weeks ago (we had it in beta for a while as we fine-tuned it)—is a version of our legacy Clause Builder tool. We already had a tool for people who wanted to put together an arbitration or mediation clause—an ADR clause—for a contract. It used a traditional drop-down choice box format.
Because, like I said earlier, not just individuals—but many businesses—have non-lawyers trying to make sure their contracts will serve them in the future if there’s a dispute. And they need an ADR clause that courts will uphold. So we had a tool to help them with that—a legacy tool.
But it occurred to us pretty early in our exploration of generative AI that we could use a library of perfected clauses—which we have, because we review clauses all the time for businesses who want to make sure they comply with our due process protocols, if they want us to adjudicate their cases someday. We also have lots of experience with clauses that courts will uphold.
So, we trained a GPT-based large language model on that internal data and created a natural language interface. Now, someone can come to Clause Builder AI and say, “I have an employment contract, and I’m looking for a clause that will send us to mediation—or mediation followed by arbitration if mediation doesn’t resolve the dispute.”
And the AI will start a conversation back with them—asking about other things they might want to include. It’ll offer options and suggestions, but it’s working with the user to generate the clause that meets their needs in the event of a dispute. We trained it internally on that data, then had a lot of internal users practice with it—trying to trick it, giving it difficult or edge-case questions.
Then we invited external users—really, anyone interested in helping us fine-tune it—to test it further. And once we got comfortable enough with its outputs (recognizing it’s probabilistic, not deterministic), we made it publicly available. It’s not a revenue stream for us. It’s a service—just like the legacy tool was. To the extent we can give more people more tools to resolve disputes better, that’s part of our mission. So that’s a pretty exciting one.
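The AAA has not published how Clause Builder AI is built, so the sketch below is only a hedged illustration of the general pattern Bridget describes: ground a large language model in a library of vetted clauses and let the user state their needs in plain language. The model name, example clauses, prompt, and use of the openai Python client are all assumptions for illustration, not the AAA's actual implementation (which she describes as trained on internal data).

```python
# A hedged, minimal sketch of a Clause Builder-style assistant: ground the model
# in a small library of vetted ADR clauses and let the user describe what they
# need in plain language. This is NOT the AAA's implementation; the clause text,
# model name, and prompt are illustrative assumptions.
from openai import OpenAI  # assumes the official openai Python package

VETTED_CLAUSES = [
    "Any controversy or claim arising out of or relating to this contract shall be "
    "settled by arbitration administered by the American Arbitration Association "
    "in accordance with its Commercial Arbitration Rules.",
    "The parties shall first attempt to resolve any dispute by mediation before "
    "resorting to arbitration.",
]

SYSTEM_PROMPT = (
    "You are an assistant that drafts alternative dispute resolution clauses. "
    "Base your drafts on the vetted examples provided, ask clarifying questions "
    "when the request is ambiguous, and flag anything a court might not enforce.\n\n"
    "Vetted examples:\n" + "\n".join(f"- {c}" for c in VETTED_CLAUSES)
)

def draft_clause(user_request: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content

print(draft_clause(
    "I have an employment contract and want mediation first, "
    "then arbitration if mediation doesn't resolve the dispute."
))
```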
We had one other tool before that—something we launched a few months ago. It’s an automated scheduling order generator. After a preliminary hearing in an arbitration, we use the Zoom transcript from that hearing, and a tool trained on a bunch of previous scheduling orders, to produce a draft scheduling order. So, instead of the arbitrator spending a bunch of time writing one, the tool gives them a draft. They can review and tweak it. And that means the parties pay less for that task.
The next one we’re working on is a filing chatbot—a natural language interface for people filing cases who might not understand the process yet. That’s one I’m especially excited about for users who don’t have lawyers.
And then the one after that—I think, I mean, we have a long list—is a panelist search tool. We want to give both our case managers (who are trying to put together lists of potential arbitrators or mediators) and eventually users the ability to interact with our data using natural language.
So instead of a manual database search, they might say, “I have a construction dispute involving solar energy contracts, and I’d like someone with X years of experience and knowledge of international law,” and the tool can help generate a short list. So it’s pretty exciting what you can build.
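As a hedged illustration of the kind of natural-language panelist search described here, the sketch below ranks invented panelist profiles against a plain-language request using a toy bag-of-words embedding and cosine similarity. A production system would presumably use a real embedding model and the AAA's own data; every name, profile, and helper here is a placeholder.

```python
# A hedged sketch of natural-language panelist search via semantic matching.
# Real systems would use a proper embedding model; the toy embed() below just
# hashes words into a fixed-size vector so the example runs with no dependencies.
# Profiles and the query are invented for illustration.
import math

def embed(text: str, dims: int = 64) -> list[float]:
    vec = [0.0] * dims
    for word in text.lower().split():
        vec[hash(word) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

panelists = {
    "Panelist A": "construction disputes, solar energy contracts, 15 years, international law",
    "Panelist B": "employment arbitration, wage and hour, 8 years",
    "Panelist C": "commercial real estate, construction defects, 20 years",
}

query = ("construction dispute involving solar energy contracts, "
         "someone with international law experience")

q_vec = embed(query)
ranked = sorted(panelists, key=lambda name: cosine(embed(panelists[name]), q_vec), reverse=True)
print(ranked)  # Panelist A should rank first for this query
```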
Jen Leonard: That’s amazing. And it’s a good full-circle moment to our definitions at the beginning—because the way you’re describing it, I think with Clause Builder, you’re moving from deterministic (with dropdown menus) to probabilistic (using Gen AI). So it can be done.
Bridget McCormack: Exactly right. It can be done. You can fine-tune it to the point where it doesn’t have to give you the exact same words in the exact same order to be a good clause.
Jen Leonard: Amazing work, Bridget. It’s just incredibly interesting to watch. And as someone who studies the theory behind these different ways to advance innovation, it’s fascinating to see someone actually doing the hard work to make it happen. Is there anything else that you wanted to share about your vision or your experience?
Bridget McCormack: I feel like I just dove into the deep end of a pool—and was lucky enough to have a talented team who said, “All right, I guess we’re diving into the deep end too. Let’s do it.” It’s been really helpful for me to collaborate with you, to put theory around some of what I’m doing. Sometimes you’ll say something and I’ll be like, “Hey, that does sound like what I’m doing.” So I appreciate that I get to collaborate with someone who’s so smart about the theory and the structure to put around what we’re up to.
Jen Leonard: Well, that’s overly kind and generous toward me—but I’ll take it, and thank you very much. And I think the last thing we had to talk about—which dovetails nicely with your experience—is sharing some resources with people where they can learn more.
I don’t think we need to drill too far down, because we’ve talked about Ethan Mollick before. He has a great Substack called One Useful Thing. It’s a great resource to be guided by. He talks in a recent edition about your employees being your R&D unit across your organization—and you talked about that earlier. So check out Ethan’s Substack for sure.
But the other resource I wanted to just close with is a recent LinkedIn–Microsoft study that came out in May. It’s about knowledge work and AI. And, you know, take it for what it is—Microsoft obviously has a vested interest in the Gen AI world—but it shows that over 70% of knowledge workers are actively using AI, whether your organization has an AI strategy or not.
And I think only 60% of organizations are actively thinking about a Gen AI strategy. Something like—this might be from a different study—only 5% of organizations have formal training in place to teach workers about generative AI.
So, I look at that as validation for organizations that are fearful or skeptical. This is here. It’s not going anywhere. Your people are using it. So you have an opportunity to shape how they use it—or they’re just going to go rogue and use it however they think is most helpful. What were your takeaways from that study?
Bridget McCormack: Yeah, that was the main one. And that’s exactly what I said to the judicial audience I met with last week. I think I’ve been saying it for some time, but now I can say it with citation to a study! I said, “You definitely have people on your teams who are using this technology. And if you want to make sure they’re using it in ways you think are appropriate, it might make sense to figure out a strategy for your organization—and also think about the cost of not putting a strategy together.”
And I think people are convinced by that in a way they may not yet be convinced by some of the upside. They’ll shut down pretty quickly when they hear about hallucinations and think, “Well, this isn’t ready for prime time. It’s not appropriate for lawyers.” But when you tell them people are already using it, that gets their attention. So I’m happy for the survey—I’ve been using it in my slides.
Jen Leonard: This has been helpful for me, because I’m going to fold it into that presentation next week where I’m not supposed to scare anybody. So this will be… maybe a little scary, but also really important.
And I think the other—I don’t have the exact statistic in front of me—but something like 46% of the people using AI started using it in the last six months. So the movement is accelerating. I was curious whether adoption would continue to rise, and it definitely seems like it is.
Bridget McCormack: Maybe we talked about this last week—I can’t remember if it was on the podcast or offline—but since we know from Apple’s developer conference that AI is soon going to be in all of our phones, I don’t see how organizations don’t develop a strategy right now. I mean, all of your employees have iPhones. They’re all going to be using it.
Jen Leonard: 100%. So thank you so much for sharing your experience, Bridget. Kudos to all of you. Shout-out to Jason—I feel like he’s my friend because we listen to the same podcast. So it’s nice to grow the community. And congratulations to him on his certification.
And thank you to everyone for joining us, learning along with us, and learning from Bridget and her team and their great work at the AAA. We’ll be back with our next edition to share resources, definitions, case studies, and just conversations we think are interesting—and that we hope other people think are interesting too.