Navigating Regulatory Risk: How AI Will Impact the Future of Legal Engagement

Summary

In this episode of 2030 Vision: AI and the Future of Law, hosts Bridget McCormack and Jen Leonard explore how generative AI is transforming legal and regulatory landscapes, focusing on two compelling examples: insights from the HBR article "Gen AI Makes Legal Action Cheap, and Companies Need to Prepare" and Adam Unikowsky's analysis of the environmental impact statement process under the National Environmental Policy Act (NEPA).

They begin with their AI Aha! moments, highlighting how voice mode has transformed their interactions with generative AI, enabling multitasking and deeper learning. The discussion then turns to synthetic data, defining the term and explaining its growing significance for training AI models ethically and effectively.

In the main discussion, Bridget and Jen unpack how generative AI can disrupt regulatory processes, using examples like the crypto industry's AI-driven public commentary campaign that delayed U.S. Treasury rulemaking. They also explore Adam Unikowsky's forward-looking suggestions for using AI to streamline environmental impact statements, improve public engagement, and reduce judicial burdens. The episode underscores AI's dual potential to drive efficiency while introducing new risks, emphasizing the importance of collaboration, education, and forward-thinking strategies.

Key Takeaways

  • Voice Mode as a Game-Changer: Voice mode allows seamless, conversational interactions with generative AI, enhancing productivity and enabling multitasking for legal professionals.
  • Synthetic Data Addresses AI Training Challenges: AI-generated synthetic data mimics real-world data, solving issues like privacy concerns and limited training datasets while scaling AI responsibly.
  • HBR Highlights Regulatory Risks of AI: The crypto industry’s use of generative AI to produce 120,000 public comments illustrates how AI can overwhelm regulatory systems, delaying rulemaking and introducing new risks.
  • Preparing for AI-Driven Legal Action: Businesses and law firms should adopt proactive strategies like red teaming, workshops, and collaborative data sharing to mitigate risks posed by AI in regulatory contexts.
  • Generative AI in Environmental Law: Adam Unikowsky’s analysis suggests AI could accelerate the creation of environmental impact statements, improve public engagement through dynamic interfaces, and reduce judicial review burdens.
  • AI as a Tool for Procedural Efficiency: Using AI for procedural tasks in administrative and regulatory law can save time, enhance fairness, and allow humans to focus on complex decision-making.
  • Collaboration is Key: Addressing AI-driven risks requires interdisciplinary partnerships, combining expertise from law, cybersecurity, data science, and ethics.
  • Opportunities for Innovation: Generative AI offers transformative possibilities for law firms, regulators, and agencies to innovate, especially in high-volume, repetitive legal tasks.

Transcript

Jen Leonard:  Hi, everyone, and welcome back to 2030 Vision: AI and the Future of Law. I'm your co-host, Jen Leonard, founder of Creative Lawyers, and I'm here with the wonderful Bridget McCormack, President and CEO of the American Arbitration Association. We are here, as we are every episode, to talk about developments in the landscape of generative AI and to think about what those developments mean for the legal profession.

As we do on every episode, we will each share what we call our AI Aha! moments—the things that we have used AI for in our personal or professional lives since our last recording that we found particularly delightful or magical. We'll then share a definition related to AI that people might not be familiar with but might start hearing in their work and in their lives. And then we'll dive into a main topic and go a little bit deeper into something that we think is particularly interesting. This week, we are going to talk about the question of what happens when legal action and engagement with the legal system cost next to nothing. In particular, we are going to focus today on what that means in the regulatory and administrative law landscape. But before we dive into that, Bridget, I'm so excited to see you! Where are you today?

Bridget McCormack: I'm excited to see you as well. I am in Las Vegas. There are two big conferences happening here that are relevant to my work: the e-Courts conference — which is the main court technology conference, always a good one — and the Construction SuperConference. Construction lawyers are a pretty innovative bunch. They have to figure out how to resolve disputes quickly to keep their projects moving, so we have a lot of interaction with the construction industry.

AI Aha! Moments: Why ChatGPT’s Voice Mode is a Game-Changer 

Jen Leonard: Well, thank you for joining us so early in Vegas. I know how busy you are, and maybe you can kick us off by sharing your AI Aha! for the week.

Bridget McCormack: Yeah, I have a fun one this week. I might be late to this party, but just in the past couple of weeks I started using voice mode so much more with ChatGPT, and I use it on my phone all the time. So I'm out walking, or we're in the car, and my husband and I are like, "I wonder what that means, or how do we understand that?" And I was like, "We have a friend we can ask." So we've now been using it for little things and bigger things. We were both trying to go back to cryptocurrency 101 because we felt like we were too old and kind of missed the cryptocurrency education, and we felt like we needed to understand it now. So we did a cryptocurrency 101 session on the way home the other night from the west side of the state. And then we're out walking and he's annoyed that his Apple Watch keeps telling him he's near somebody else's AirPods and he doesn't know how to turn it off. I'm like, "ChatGPT will help you," and I just asked her, and she talked him through it. Voice mode is amazing because the information comes in through your ears while you're trying to do something with your hands. It's like a superpower.

So I then realized you can do it on your computer as well. I had only been using voice mode on my phone, so now I've been using voice mode on my computer when I'm trying to have it teach me something. I'm obsessed with how I do my to-do lists and how I keep track of things. I'm a bullet journal enthusiast and I will always have my bullet journal. But for some of my bigger projects, my notes are scattered throughout my bullet journals and I'm on bullet journal five for the year, and there are important notes back there. And so I needed to start keeping track of some of the things that I'm thinking about where there are lots of connecting thoughts over a long period of time. So I have been asking voice mode on ChatGPT on my laptop to talk about the different tools and then to talk me through practicing with them. It suggested a bunch of different ones, and then I would pull them up. And right now it's sort of like my tech support and I'm practicing, and I'll say, "No, it's not working," and describe what's not working. Maybe I should have figured that out a long time ago, but it's been amazing.

Jen Leonard: For people who maybe haven't even explored ChatGPT on their phone, what's the difference between using ChatGPT as they're hearing about it and using ChatGPT voice mode?

Bridget McCormack: So I think what you're asking is this — with ChatGPT voice mode, instead of typing in a question or a prompt, you just speak it, and the LLM speaks back to you. It'll say, "That's a great question. Here's what I think you're trying to figure out. Here are my responses and ways to think about that. Was that helpful? Do you need more information? Can I help you think in more detail about any piece of that?" It's like having a thought partner that you're talking to instead of writing with.

And for me at least, it turns out to be a faster way of getting through something. And maybe people who are faster typers or think better through their fingers on their keyboards might not like it as much, but it's turned out to be a great tool for me in working through things. I can be on different screens — my ChatGPT window, I don't even know where it is because I'm in my Microsoft OneNote window — and we're just talking. So it's kind of cool.

Jen Leonard: It's very much like — if people have seen Her, the movie with Joaquin Phoenix — his interaction with the AI, right? He talks back and forth with it very naturally. I think voice mode is new on the desktop version, right?

Bridget McCormack: It must be, because I think I would have noticed it before. I'm always in there looking for some new thing I can play with. Are you following the "12 Days of Ship-mas" from OpenAI? 

Every day for 12 days they're announcing a new product that's shipping live. Apparently yesterday they released something called Sora, which is their video model, and it basically... I'm not totally caught up, but it crashed their entire system because so many people tried to get access to it. Sam Altman has been updating everyone on social media like, "We're working on it, I promise. We had no idea we'd have this much interest and we'll keep working on it." 

So maybe this was one of the products they shipped in the 12 Days of Ship-mas, but I'm not sure.

Jen Leonard: I'm pretty sure I logged into ChatGPT on my desktop recently and there was a notice that popped up saying voice mode is now available on desktop. Like you, I really love interacting with voice mode. I feel like one of the most valuable things for me professionally is having a really great thought partner just to talk things through with. And there is a difference for me, cognitively, in saying things out loud and thinking them through in a back-and-forth rather than typing them out. So I really love that. 

And it's funny—I know he's controversial (people have many thoughts and feelings about him)—but I remember years ago reading an interview with Elon Musk, where in his very Elon way he was talking about engaging with technology, and he said that we would grow by leaps and bounds in our ability to use technology if we could get rid of our "meat sticks" as the interlocutor with the technology, meaning our fingers. I remember rolling my eyes when he said it, but he's right in the sense that when you eliminate that sort of friction of having to type things out, you can do a lot more, more quickly, and feel more closely connected with the technology, I think.

Bridget McCormack: Yeah, and especially if you're using it to coach you through something, you can be doing the other thing and having this conversation with it. I don't think there is voice mode on Claude or Gemini yet — I think that's right. Again, it's another head start that OpenAI has, which is kind of interesting.

Jen Leonard: Totally. And as you're saying that, I just remembered I used it for this purpose. (And I don't know what this says about me as a parent, but) I was driving with my daughter and she was asking me about genetics — like what genes are — and I could not figure out how to actually describe this in a way that would make sense to an eight-year-old. So I pulled up voice mode because I was driving and said, "Could you explain genes (spelled G-E-N-E-S) for an eight-year-old?" And it was so great. First of all, it spoke out loud as though it were speaking to an eight-year-old and said something like, "It's great that you're curious about genetics and science." Then it essentially said, "The human body is made up of tiny little pieces, and every piece has a recipe in it. That recipe tells the body how to combine different things for different results, as though you were baking cookies." It was incredible. I would never have come up with that description for what genes are.

Bridget McCormack: Yeah, the back-and-forth is really amazing. And then your daughter could ask it a follow-up question. We were doing this in the car on the way home (and I have it coming through my Bluetooth) and my husband's interrupting it to ask follow-up questions. And it's pretty good about... like, it doesn't care that we have two different voices asking it questions. It's like we're in a three-way conversation.

Jen Leonard: Yeah, and I can't remember whether you've seen Murder at the End of the World yet. There's an AI in the show and it's a hologram, and everybody in the show interacts with this AI through voice mode, but it has a sort of embodied version of the AI. And I always think that's probably pretty prescient — having something with a persona that embodies this voice mode a bit more.

Bridget McCormack: When I'm presenting about this technology, I use — credit to Zach Abramowitz — this quote of Ilya Sutskever's (who of course was the chief scientist at OpenAI and now has his own company doing safety work around large language models). He said, "The thing that surprises me the most about the models is that I feel heard when I'm working with them." I feel like that's even more so with voice mode. You and I have talked about how that's even true when you're typing prompts and it responds in a way that feels very much like a human — and it can be a little bit odd. It's even more so with voice mode, I find. I don't know if you agree.

Jen Leonard: 100% agree. It's much more like a human interaction. I can't remember the word for it, but they've built in a little bit of the verbal tics and things that make it sound really human. And you can interrupt it like you would a human. It's very cool. I love using it that way. And my use this week was actually much less interesting, so I'm grateful to you for raising voice mode — I use it constantly. I use ChatGPT as tech support when I cannot figure something out.

I have my own website, and I was trying to figure out how to integrate this podcast and another podcast so that when we release them, they automatically populate on my page. I just could not figure out how to do that on Wix. So I asked ChatGPT, and it gave me very simple directions — just copying and pasting the RSS feed right into it. I mean, these are the kinds of things that take a person who's a small business owner hours to figure out that now I can do in five minutes. It's incredible.

Bridget McCormack: That's really cool. We've talked about this in other contexts as well — it does feel like it's a superpower for small and medium businesses in general. Law firms too, right? Small and medium law firms really have some growth potential by harnessing this technology, which is exciting.

Jen Leonard: Definitely. So, great AI Aha! moments this week. And if you're out there listening and haven't tried voice mode on ChatGPT, we strongly recommend it. It's a very cool experience.

Definitions: What is Synthetic Data, and Why Does it Matter?

Jen Leonard: Moving into our definitions, one of the cool things about learning about AI is all the jargon that comes along with it. So, Bridget, do you want to describe what our definition for today is, which is synthetic data?

Bridget McCormack: Absolutely. So synthetic data is a term I started hearing in the context of generative AI maybe a year ago, when you first started hearing concerns about whether the frontier model companies were going to run out of data to train their AI models on. That was a relevant discussion, because it used to be that that was how you scaled your model, right? That's how they got better — you trained it on more data. And there was this concern that they were going to run out of data. Like, if they've already ingested the entire internet and every book they can get their hands on and — I guess in Google's case — every YouTube video, then there's nothing more to ingest and they're gonna hit a wall and not be able to scale anymore.

So I started hearing about synthetic data, which is data that's generated by the AI models themselves that mimics real-world data — which makes sense, right? I mean, like we were just talking about these long conversations we're both having with this technology, then all of a sudden there's a lot more data that it can train itself on to continue learning.

I don't feel like I have a sense of how much the frontier model companies are using synthetic data at this point to scale their models. But it certainly is — or could be, at least — a useful tool for thinking about responsible AI integration. Do you have a sense of whether it's just something that they're all using now? I don't know that.

Jen Leonard: Most of the conversations I've heard are about their forward-looking approach to using synthetic data and making sure it's data that can actually help advance the models rather than being inferior to real data. I started hearing about it the same way you did, when they were starting to run out of real data, and I remember most of the commentary being sort of like that movie with Michael Keaton where they keep making copies of him and every copy gets worse and worse.

The concern was that it was going to be really garbage in, garbage out. But the more I see now about the use of synthetic data, the more I see commentary about how helpful it can be — if we're responsible about people's creative works or privileged and sensitive information, we can produce synthetic data using the AI that is actually strong enough to train it but avoids using people's sensitive information. I think for lawyers that would be a huge leap forward in our ability to engage and build models that have legal applications without those concerns.
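To make the definition a bit more concrete: synthetic data is data manufactured to mimic the shape of real-world data without containing any real records. The toy Python sketch below fakes a small docket of dispute records; the field names and value ranges are invented purely for illustration. The frontier labs do this at scale by having models generate the examples themselves, but the underlying idea is the same.

```python
import random
import json

# A toy illustration of synthetic data: records that mimic the *shape* of a
# real dataset (here, a hypothetical docket of consumer disputes) without
# containing any real names, facts, or personally identifiable information.
# Field names and value ranges are invented for this example only.

CLAIM_TYPES = ["billing dispute", "warranty claim", "service cancellation"]
OUTCOMES = ["settled", "withdrawn", "decided"]

def make_synthetic_record(record_id: int) -> dict:
    """Generate one synthetic dispute record with plausible, fake values."""
    return {
        "id": record_id,
        "claim_type": random.choice(CLAIM_TYPES),
        # log-normal gives a skewed distribution, roughly like real claim amounts
        "amount_usd": round(random.lognormvariate(7, 1), 2),
        "days_to_resolution": random.randint(14, 540),
        "outcome": random.choice(OUTCOMES),
    }

if __name__ == "__main__":
    synthetic_dataset = [make_synthetic_record(i) for i in range(5)]
    print(json.dumps(synthetic_dataset, indent=2))
```

Because nothing in the output corresponds to a real person or matter, a dataset like this can be shared or used for training without the privacy and confidentiality concerns Jen and Bridget describe.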

Main Topic: HBR Spotlight: Gen AI Makes Legal Action Cheap—Are Companies Ready?

Jen Leonard: Well, that brings us to our main topic for today. I thought synthetic data was an interesting definition because it touches on the proliferation of all sorts of new information and content in our systems. Today we want to talk about what happens when any of us can produce new content for any purpose at the click of a button — or maybe without even clicking a button, maybe just using our own voices — and how that manifests in spaces like regulatory agencies that use a lot of public commentary to decide how to shape policy.

We're going to talk about two different pieces today. The first is an HBR article by Sean West and Stephen Heitkamp called Gen AI Makes Legal Action Cheap and Companies Need to Prepare. Then we'll compare it with a recent blog post by an appellate litigator you brought to our attention, Bridget — Adam Unikowsky — who talks about this in the context of the National Environmental Policy Act. So let's start with the HBR article. Maybe you could summarize it for us, Bridget?

Bridget McCormack: Yeah, absolutely. We talked about this article when it came out, which was maybe a couple of months ago at this point. I can't tell you how many people sent it to me at that time. It's an interesting window into this issue of whether reducing the barrier to entry to the legal process will be good or bad, right? This is one flavor of it. (And we'll reserve for future conversations what it means in other contexts, like personal injury law and other ways in which reducing the barrier to entry could be a really good thing or maybe a really bad thing — interesting conversations, I think, around that.) But this one's fascinating.

I'm going to take a step back before I describe what the HBR piece focused on — a specific case — to do, like, Ad Law 101, which is really dangerous because I never even took Ad Law.

But I feel like it should be a required class because it's such an important source of law in our society. If you don't understand how administrative agencies — both at the federal and the state level — function and what an important source of law their work is, I think it's hard to understand the operating system of our society generally. Some people call it the fourth branch of government, right? Agencies are where a lot of really specific regulatory work is done by experts — experts in whatever the agency's mission is. 

One requirement for new regulations from any agency is that they have to publish the text and also their justification for the rule, so the public can see what the proposed rule is and why the agency thinks it's important. Then there's this public comment period — literally an open public comment period, usually 30 to 60 days, and people can write in. Often it's experts — I don't know how many regular people are checking the Federal Register every day for new rules to comment on — but if you're in an industry affected by the rule, you submit a comment, either in support or in opposition, or making suggestions for changes, pointing out downstream consequences the agency might not have thought of, or flagging potential legal concerns. There are entire legal practices around this process, right? There are lawyers who do this and only this. And then the agency is supposed to review all those comments and synthesize them. If there's a substantial factual or legal concern, the agency has to address it in its final rulemaking. That process can be, and often is, the subject of litigation after the fact.

Okay, that was a little bit boring, so let me get back to this HBR story and example. The U.S. Treasury Department was proposing a cryptocurrency disclosure rule. I don't remember exactly what the specifics of the proposed disclosure requirements were, but there was resistance from the crypto industry. A group that called itself LexPunk Army — which was a sort of legal and tech group — used an AI bot to enable really quick and widespread public commentary on the rulemaking. They generated 120,000 comments, which is significantly more than you normally see on a proposed regulation like this. It meant the rule was delayed significantly, because the agency had to work through all 120,000 comments, respond to them, synthesize them, and then take them into account in its final rulemaking process. And so it was the first practical example of what lots of lawyers from the very beginning worried about as one of the doomsday scenarios with this technology and its potential legal use cases: all of a sudden you can file 120,000 comments — and frankly it worked; it delayed the rulemaking and changed the outcome — just because you now have a technology that can make that happen, something that wouldn't have happened before the technology existed.

That's the question I've heard from the beginning: is that good or is that bad, right? Do we want that outcome? Whether we want it or not, this is a perfect example that it's here. This is now possible. And the HBR piece was really focused on what you should probably be doing and thinking about — what your strategies are if you are a business that has to think about these kinds of regulatory challenges. I actually think the authors are thinking beyond regulatory scenarios: what does it look like to prepare yourself for this new set of risks that hasn't been on our radar for businesses or law firms until now? That was a pretty interesting part of the piece. I'll pause for a minute — was there anything else that I should have covered before we discuss some of the recommendations?

Jen Leonard: I think that was great. And just to put a fine point on one comment you made about the crypto industry: in the article, the authors say this could be just evidence of an industry that happens to be really facile with technology using its skills to inundate the system, or it could be a harbinger of what's to come in other regulatory landscapes. The authors say they think it is the latter rather than the former. But because it's sort of an early case of people who do use technology well, that gives us a little bit of time to think about what happens when this occurs across all different regulatory landscapes that we might not anticipate.

Bridget McCormack: The authors are really thoughtful in placing this specific risk within the larger geopolitical and economic disruption that businesses are operating in today. They write a lot about the more general geopolitical risks. And they make some suggestions — or at least they say you should take it seriously, right? Like, don't write it off as, "Okay, the crypto people figured out how to do it, but that's not going to happen in my business because we're not... whatever." Their point was you have to take this seriously as you prepare for this kind of risk, because it's absolutely possible in any industry.

So, I don't know, do you have a thought on — if you're advising a business or a law firm and now you're advising them to put this on their risk register... we now have to have, on our risk register, the possibility that all of a sudden 120,000 lawsuits could be filed against us tomorrow — what are the strategies you would advise them to think about in preparing for this kind of risk?

Jen Leonard: The authors did something that I know you and I have taught in classes we teach together — and I love that they did this — which is being inspired by other industries with analogous situations. They draw from the cybersecurity landscape and compare it to, I think, a distributed denial-of-service (DDoS) attack. The way that cybersecurity experts deal with that, as I understand it from the article, is by pooling and aggregating information across companies in a centralized database (they list a few well-known ones in the article), so that if you're operating Company ABC somewhere across the world and I'm at a different company, I can benefit from your experience and be more prepared, and collectively we can come up with a counter-offensive, I guess, to something like this. They talk in the article about how uncomfortable something like that would make lawyers, because we keep everything very, very confidential and we handle privileged and sensitive information. But I think the point they're making — and one that you and I have talked about in different contexts — is that we're entering a totally new era of risk in the way that we think about it as lawyers. Our discomfort comes from trying to apply the frameworks we used in the prior era to this new era. But we're actually doing ourselves a disservice if we don't reshape the way we think about, for example, sharing information to protect our clients from risk. So I think in the long term, their counsel would be for the whole profession to rethink the way we handle information.

Bridget McCormack: I agree with that — sharing information with one another, yeah, at least. Or think about the ways in which you can anonymize and share your information, which also has gotten a little bit easier with this technology that can take unstructured data and clean it in any way you want. So yeah, it feels like another collective action problem to me, though. I think that their suggestion that we learn from cyber is a really good one. But we've talked about this in other contexts — I think for lots of cultural and understandable reasons, there are significant silos within the legal profession.

And I don't know who takes the lead on this kind of response management project, right? Who's going to say, "Hey, we all are running these legal businesses that now have this new significant risk, and the best way to guard against it is for us to think about a strategy that breaks down our silos." I don't know who does that. I feel a little bit worried about where we find the leader for that change management process within the profession.

Jen Leonard: It reminds me of the conversation around outside counsel guidelines and coming up with uniform, anonymized sets that you can use to sort of accelerate gen AI adoption with your outside counsel, right — coming up with structures that create different incentives for collective action. I don't know of anybody who's thinking that far ahead, though the article does talk about... I think they referenced DLA Piper as a firm that is currently working with its clients to do red teaming (which is a term that we talked about before on the podcast) to sort of simulate what kinds of mass-produced claims or regulatory filings might be submitted in the realms that affect their clients. And to me, that feels like an innovation that would be comfortable for law firms and also potentially provide a new service offering to clients that are trying to manage all of this risk.

Bridget McCormack: Yeah, it's like — you'll know better than I do — I don't know how often law firms are doing tabletop exercises around enterprise risk management. Businesses do it all the time, right? I mean, we do tabletop exercises a lot, and it turns out to be quite helpful when you do have an emergency. I don't know how many law firms are doing that besides DLA. It was interesting that DLA is doing it.

Jen Leonard: Yeah, I have not heard of many doing it — although maybe they're out there and I'm just not aware of them. But it is a great way to sort of scenario-plan and future-cast what could happen under different sets of circumstances. It also seems like, because of the things that the authors point out — all of the geopolitical chaos and changing regulatory structures and frameworks — all of the law firms that are listening to this are serving corporate clients that are dealing with this complexity and, just as with generative AI strategy generally, they need help and need support in thinking this through. They don't have the time to now try to figure out what this LexPunk Army means for their risk assessment. So it strikes me as a very forward-looking approach that DLA Piper has. And I think there are lots of opportunities like that emerging for firms now.

Bridget McCormack: Yeah, it's interesting. I would actually think about how to use the technology to think about this new segment of enterprise risk management that you're going to have to consider, right? Great startup idea — those of you who are out there looking for a startup idea, this is something that businesses and law firms are going to need. I think it's pretty interesting.

Jen Leonard: And in the meantime, I would say a couple of thoughts on what corporate legal departments or law firms can do until there's some sort of massive shift in how we think about it: education, first and foremost. I think most people are probably not even thinking about this as an outcome of generative AI. They're thinking a lot about gen AI, but not in this respect. I think also, as we just suggested, workshops and tabletop exercises with your clients to walk through what the risks could look like and how you might respond together. And then I think having some sort of protocol for what happens when there's an incident like this. You might need to wait until something actually happens to have something live to respond to, but you want to have a plan in place in advance for what that response might look like.

And all of that would require collaborating with people who have different types of expertise: cybersecurity expertise, data science backgrounds, ethicists — that kind of thing. And then I would just say, to your point about collective action, find industry opportunities where people are coming together to share how they're responding to this risk. Those would be the steps that I would take — and have a communications plan ready to go if this were to happen in your realm.

Bridget McCormack: Yeah, it's yet another example where lawyers are going to have to get comfortable with the fact that we need to collaborate with people who have other expertise. We're not going to solve this one on our own. We need partners outside of our expertise to really manage it, I think.

Main Topic: Environmental Law Meets AI: Discussing Adam Unikowsky’s Case for EIS Automation

Bridget McCormack: One other — sort of the other side of the coin — from Adam Unikowsky's Substack this week: he wrote about this D.C. Circuit case which is Marin Audubon Society v. FAA. He wrote a pretty interesting piece about environmental impact statements and how generative AI might be useful on the agency side of rulemaking. So do you want to summarize a little bit about Adam's piece and the takeaways?

Jen Leonard: Sure, I'll do my best to try to summarize what was a really great piece about an act that I wasn't familiar with — the National Environmental Policy Act (NEPA) — which, as I gather from Adam's piece, is an act that requires agencies to prepare environmental impact statements, or EISs, in advance of any sort of federal development project. The goal of the EIS is not necessarily — as it's structured under this act — for these agencies to create a record that judges would review for substance to determine whether the project should actually happen at all. The goal really is to educate the public so that the public can engage with the agencies and do what we were talking about in the last segment, which is try to work with the agencies to shape what the public thinks should happen around these projects. It's an entirely procedural versus substantive framework.

But judges sometimes end up getting involved in the litigation that follows the creation of these EIS reports, when plaintiffs who are unhappy with a proposed federal project sue. And Adam's point really is that even though the lawsuits are ostensibly about the EISs themselves, the real desired outcome on the part of the plaintiffs is to slow down or stop the project altogether, right? The creation of these EISs can take years, which can stifle development of projects that maybe we want to happen. And the actual EISs themselves — he lays out the one in the D.C. Circuit case and pulls a few samples — are done by experts and are very thorough. In that case, I think it was 3,600 pages of detailed information about the impact of the proposed project.

And Adam talks about how AI could enable us to accelerate this work. He has some interesting ideas about how we could reshape the way the NEPA framework currently operates and the friction it creates. He sort of has three different pieces to this, I think. The first is having the AI actually assist in producing the EIS in the first place, so it doesn't take years and years to produce. The second is making the EIS dynamic, because even the most interested member of the public is not going to be able to read 3,600 pages about a project. But if we use a ChatGPT-like interface, we can upload the EIS and ask questions of it, so that we can focus on learning what is really important to us as members of the public. So, a more dynamic interface.

And then I think his boldest suggestion — and you can correct me, Bridget, if I'm misstating any part of his argument here — is removing the judicial review piece of this altogether, because the judges are not reviewing these EISs for their content to determine whether the content was properly designed. They're really reviewing the procedure through which the EISs were produced and whether it's sufficient. And Adam argues that in that specific use case, AI could do that work without engaging judges. But I'm going to stop, because it was a really complex piece and I want to see what I've misstated.

Bridget McCormack: Yeah, well, first off I encourage everybody to read Adam's piece. He's such a great writer. And you were saying this earlier, Jen — he writes about complicated legal concepts in a way that's really clear and simple. You don't have to be an expert in rulemaking or the substantive case to understand the arguments he's making. So I really encourage you to read it. But yeah, I think you got it. I mean, I didn't know anything about this process myself either; I've never been an environmental litigator. But apparently completing an environmental impact statement can take four and a half years, and many take over six.

And no project can proceed until it's complete. So you could imagine being in a position where a six-year delay is actually what you want. But the truth is, there's pretty good evidence that clean energy projects actually have a harder time making it through this process than oil and gas projects, which benefit from exemptions. So it isn't clear that the process is actually serving the interest it was intended to serve. But, you know, Adam says there are really good reasons to do it if we can figure out how not to weaponize it, right? And this technology might really be the way to figure out how not to weaponize it. Imagine the technology could produce a very thorough environmental impact statement on a short timeline, and then present it in a way that the people who actually do know something about the particular project, the particular area, the effects it would have on, in this case, the mating places for grouse, could really engage with.

If those people could interact with it and get that information to the agency, and the agency could take it into account and make a change that was kind of win-win for everybody, that feels like a pretty good outcome. Isn't that why we have the rulemaking process? It's so that we can get important information to an agency, so that the experts in the area can make a better decision about what to do. If the process has just become a way to delay, and nobody can actually read 3,600 pages and interact with it in a way that makes the process better, then what are we doing, right?

And so the fact that the technology could actually help serve the purpose of the process — or at least its intended purpose — feels kind of exciting. The part that people, I think, will be most worried about is having the technology actually conduct the procedural review. But again, because it's not substantive, it's not like you need the technology to say, "Yes, in fact, this is going to have bad outcomes on the grouse." If that were the question, then I think you need humans. (Not that judges are experts in grouse mating habits, but they can review the work of the experts and make sure there isn't a legal problem with the project proceeding.)

But if it's really just trying to... if the only question is, "Has the agency done procedurally what it's required to do so that the public can interact and move on to the substantive part of the process?", the AI can probably do that as well as any human — maybe even better. That's the kind of project that you could build today with this technology at home, with a custom GPT-like tool, because again, it's a niche data set, right? You could teach it: these are the elements that an EIS has to have. You could train it on perfected EISs. You could imagine building a tool that could do that very, very effectively and, again, save judges' time for the kind of work that judges really need to do, right, that we're not going to want to defer to technology. 

And I'm not saying you would — I think there are steps along the way in this process. You could imagine building a generative AI tool that could take the first cut, right, and say, "We've reviewed it against our EIS-GPT and we see these three problems," and then the agency could fix those three problems. It would just help the agency do what it needs to do. Or, "We think it's met all of the things on the checklist," and then you could imagine going to a judge to either confirm that or send it back for additional procedural work. It just feels to me like it's the kind of thing that could do some of the work that probably isn't the best use of our judges' time — you know, save the judges for the stuff that we really need humans doing, where judgment is really important. So I think it's a very cool idea. And I am really interested in the possibilities for governments and courts — as you know (we've talked about this a million times, you too, I know) — how they can get the most out of this technology. And I worry a little bit — I always do — that they could get left behind if the resources aren't there to help them figure out how to use it to protect the public, which is what they do.
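As a rough illustration of the "EIS-GPT" idea Bridget sketches above, the heart of such a tool is a procedural checklist review: hand a language model the elements an EIS is supposed to contain and ask which ones appear to be missing. The Python sketch below uses the OpenAI SDK; the model name, the input file, and the checklist items are placeholders for illustration, not a statement of what NEPA actually requires, and a real system would need to chunk a 3,600-page document rather than pass it in one prompt.

```python
from openai import OpenAI

# Placeholder checklist; a real tool would encode the elements an EIS must
# contain under NEPA and the agency's own regulations.
REQUIRED_ELEMENTS = [
    "statement of purpose and need",
    "description of the proposed action and alternatives",
    "description of the affected environment",
    "analysis of environmental consequences",
    "summary of public comments and agency responses",
]

def review_eis(eis_text: str, model: str = "gpt-4o") -> str:
    """Ask a language model which required elements appear to be missing
    from a draft EIS. Returns the model's review as plain text."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    checklist = "\n".join(f"- {item}" for item in REQUIRED_ELEMENTS)
    prompt = (
        "You are reviewing a draft environmental impact statement for "
        "procedural completeness only, not substantive merit.\n"
        f"Required elements:\n{checklist}\n\n"
        "For each element, say whether it is present and quote the section "
        "heading where you found it. List any elements that appear missing.\n\n"
        f"Draft EIS:\n{eis_text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = open("draft_eis.txt").read()  # hypothetical input file
    print(review_eis(draft))
```

The output of a tool like this would go to the agency, or eventually a reviewing judge, as the "first cut" Bridget describes, with humans confirming or sending it back for additional procedural work.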

Jen Leonard: Yeah, and it makes me think back to a piece I think Ezra Klein wrote last year for The New York Times about infrastructure and how long it takes in the US to build any federal project. I think he made the argument that lawyers really are the problem — that we put up so many different governance structures that they all overlap. And the result is some of the things that Adam's talking about. I was excited when you shared this with me, because I don't think you want to be less deliberative — I think there are really good reasons to have structures in place.

But I would hope that lawyers would start thinking, at the outset of creating an administrative framework, about how we might leverage AI in a way that actually enables — to your point — what we're trying to achieve, rather than creating complexity that people can use, either intentionally or unintentionally, to delay things forever. And then we could elevate the esteem of the profession by accelerating development and contributing to the production of new projects.

Bridget McCormack: Yeah, I don't know. I think there's a lot of exciting upside potential. Again — startup founders out there, government, LLM tech — excited about it.

Jen Leonard: So, based on that one example from the NEPA context, how can we think about a framework that can guide us to find other places in the law or the administrative agency realm where we could think about these solutions? What do you think are the hallmarks?

Bridget McCormack: Adam does a nice job thinking about this, but there's so much more thinking we can do around it. The reason this might be an easy use case, if you're building a continuum, is that there are no due process rights at stake, right? There's no individual liberty called into question — it's simply a procedural step in a regulatory process. I'm always thinking about this across all the different kinds of disputes that we have, and where along the way you could build in... I'm thinking of them as modules (or, as my talented CTO Diana calls them, "lily pads"). What are the lily pads along the way where you can have the technology assist the human decision-maker, to save the human decision-maker for the important stuff that only the human can do?

So you could imagine steps along the way of any process where some of it is just procedural, right? Like, you need to understand a timeline of a particular dispute or a timeline of a regulatory process so that people can engage with it and agree or disagree. Having a human judge or her clerk put that together isn't the best use of talented lawyers' time. You could imagine having a lily pad where the technology did that and said to the parties, "Do you agree with this?" I'm thinking now in the ADR process — I think for courts to have that much interaction in the middle of a case is harder given their limits. But you could imagine sending the parties the tech-produced timeline of events and saying, "This is the timeline that seems to reflect this project or this dispute. Tell me what's wrong with it," right? And then the human could review the input from the parties, edit it, update it, and get to a place where everybody agreed.

I think the possibility that we can take parts of a dispute resolution process and get agreement along the way will really help us get better dispute resolution at the end, right? If we all agree along the way that, yep, this is the timeline, these are the different legal arguments we're making, these are the responses — we all agree about the universe of things in this dispute, then we're all going to come out of that process feeling like we were heard, we were understood, even if we didn't prevail. All the literature on procedural fairness, I think, is really important right now. You can build in so much more procedural fairness when you use these tools along the way. That's another reason why I'm excited about some of those lily pads along the way. I don't know... Do you have other thoughts on that topic?

Jen Leonard: I don't think I can add anything to what you just described. I share your excitement. I'm always looking for ways the profession can elevate problem-solving for the parties — whether in a dispute context or a regulatory context — and help the public actually engage with agencies in a way that shapes policy. Finding these cool opportunities to leverage technology to elevate the people who matter in all of this is exciting to me. And you just offered the perfect segue to a future episode we're going to have about disputes themselves and how we can be leveraging gen AI — and how a few really cool startups are doing that on the plaintiff side and the defense side. Then we can also talk about how it unfolds in ADR. There are just really, really interesting and exciting opportunities on the horizon.

I loved this conversation. It did show both sides of the coin, right? Technology can be used for ill or for good, and we're excited to have conversations about how to use it for good. Well, thank you, everybody, for listening to this episode of 2030 Vision: AI and the Future of Law. We will see you on a future episode. And until then, be well.