AI Transformation in Law: What Legal Leaders Should Know

Real-world lessons on culture, training, and governance from the Future Dispute Resolution – New York Conference 

Successful AI transformation in legal practice doesn’t start with software; it starts with people. Organizations that focus first on readiness, learning, and governance are better positioned to integrate AI in ways that enhance, rather than disrupt, professional judgment. 

Legal innovators recently shared what that looks like in practice at the 2025 American Arbitration Association® (AAA®) Future Dispute Resolution – New York Conference, offering concrete lessons for leaders navigating AI adoption today. 

Culture Comes Before Technology 

AI transformation succeeds when culture leads, not technology. Panelists emphasized that organizations are most successful when they invest in governance, collaboration, and shared learning before scaling or selecting specific AI tools. 

Moderator and AAA Vice President of Innovation Linda Beyea pointed to the AAA’s long-term approach to innovation as a key reason it was ready for the current wave of generative AI. The organization’s foundational work, she said, “really prepared [it] for this GenAI moment,” underscoring that readiness is built over time, not achieved through rapid adoption. 

That preparation matters because AI capabilities are evolving quickly and growing more powerful. As Hugh Carlson of Three Crowns cautioned, “You don’t want to engage what is a rapidly evolving and very powerful technology, without the benefit of some type of expert guidance.” Tool adoption without organizational maturity, panelists warned, can introduce risk rather than reduce it.

Cultural barriers remain especially acute in legal environments, where risk aversion can discourage experimentation. Jennifer Leonard, founder of Creative Lawyers, stressed that innovation depends on creating psychologically safe spaces where professionals can share lessons and surface concerns without fear. “You have to balance risk with the need to share across the firm what they’re learning,” she said. 

What this means for leaders: 

AI isn’t plug-and-play — build a culture that supports experimentation, psychological safety, and shared accountability before scaling tools. 

Hands-On Learning Builds Trust 

Panelists agreed that trust in AI is not built through policy documents or high-level training alone, but through use. Professionals gain confidence when they can test tools directly within real workflows and see how the technology behaves in context. 

The AAA’s national rollout of Clearbrief to its panel members offered a practical example. Clearbrief is an AI-assisted legal writing and evidence-verification platform integrated directly into Microsoft Word. Rather than relying on passive demonstrations, the AAA emphasized experiential learning through Clearbrief Academy, a three-part training series designed around hands-on engagement.

Participants worked directly with their own case documents to verify sources, check citations, and generate AI-supported summaries, often in interactive, one-on-one sessions. That approach, panelists noted, allowed users to evaluate the tool’s strengths and limitations for themselves. 

“The feedback that we’ve gotten has been so positive about the trainings, about the experience, about how it’s transforming the work that the arbitrators are actually doing,” said Clearbrief Founder Jacqueline Schafer. 

Within weeks of launch, more than 400 active users logged thousands of document interactions. The volume and depth of engagement reinforced a key lesson: trust grows faster when professionals can apply AI to real problems, rather than abstract examples. 

What this means for leaders: 

Prioritize hands-on, context-specific training that lets users experiment with tools on their own work — that’s how trust is earned. 

Innovate Boldly — and Plan for Breakdowns 

Panelists cautioned that without structure, AI innovation risks eroding trust rather than delivering meaningful gains. Responsible AI adoption, they argued, requires organizations to think not only about what technology can do, but where it might fail. 

Several speakers returned to the importance of understanding the “Failscape”: identifying how systems could break, misfire, or be misused before they are deployed at scale. As Joshua Walker of System.Legal put it, “Understanding the Failscape is job one.” Anticipating breakdowns early allows organizations to design safeguards, accountability mechanisms, and response plans before problems arise. 

Rather than avoiding experimentation altogether, panelists advocated for disciplined experimentation: testing AI in environments where risk is understood and managed. Leonard described how her team tiers AI use cases by risk level, creating pathways for safe learning.

“We have to be tiering our risk assessment,” she said, “and we need to be finding yellow light and green light cases where we can be experimenting safely.” 

Low-risk “green light” projects allow teams to build familiarity and confidence, while moderate-risk “yellow light” initiatives require added oversight, piloting, or review before broader adoption. This approach enables organizations to learn from real use without exposing themselves or their clients to unnecessary risk.

What this means for leaders:

Map out risks early, define safe spaces for testing, and build structures that balance innovation with accountability.

The Session’s Bottom Line: Leadership, Not Tools, Will Shape the Outcome 

The discussion highlighted a shift from experimenting with AI to executing on its use, as leaders focus on building the structures and learning needed to support everyday legal work. 

Panelists made clear that AI transformation is not driven solely by technology decisions, but by the choices organizations make about readiness, responsibility, and trust. Those choices will determine whether AI becomes a meaningful support for legal work or a source of new risk. 

As AI continues to evolve, the challenge for legal leaders is no longer adoption; it is stewardship. 

Explore more insights from the 2025 Future Dispute Resolution – New York Conference by downloading the full conference report. 

Download the AI & ADR Insights Report

December 16, 2025
