Quantifying AI’s Impact on Dispute Resolution 

From Prevention to Process to Outcomes 

For decades, dispute resolution has been evaluated largely through operational metrics — time to resolution, cost, and procedural efficiency. Increasingly, AI is changing that equation, not simply by optimizing those metrics, but by enabling earlier intervention, fairer process design, and more pathways to resolution before disputes fully crystallize. 

That perspective framed “Quantifying AI’s Impact: Real-World Applications in Prevention, Process, and Outcomes,” a session at the 2025 American Arbitration Association® Future Dispute Resolution – New York Conference. Panelists explored how predictive analytics, transparent system design, and human-centered use cases are defining AI’s role across the dispute lifecycle. 

Drawing on real-world deployments in insurance, construction, mediation, and online dispute resolution, the discussion focused on where AI is already delivering measurable value, and where caution and design discipline remain essential. 

Prevention Is Becoming the Point 

One of the session’s central insights was that AI is shifting dispute resolution upstream. Predictive tools that analyze historical claims, costs, timelines, and outcomes are enabling earlier intervention, often before a dispute formally exists. 

“The users of our predictive algorithm are able to generate accurate, explainable predictions on how the process is expected to evolve,” said Yariv Lissauer of Canotera. “By gaining this information, they’re able to cut to the chase and reach resolution way faster, and with less pains associated with the litigation or the dispute resolution process itself.” 

Already in use across industries such as insurance and construction, these systems help counsel and case managers assess exposure, triage claims, and simulate conflict scenarios before escalation. Rather than replacing judgment, panelists emphasized, predictive AI provides a clearer map of risk, allowing humans to make better-informed decisions earlier. 

“Most dispute resolvers are like the ambulance at the bottom of the cliff,” said Colin Rule of ODR.com. “But it’s a lot better to build a railing at the top of the cliff.”  

Gretta Walters of Chaffetz Lindsey added that prevention is increasingly embedded directly into contracting workflows. Companies are now using AI-enabled contract management tools to flag inconsistent language and high-risk clauses during drafting, signaling a future where dispute resolution and dispute avoidance continue to converge. 

“I think we’ll see more and more use of AI tools — and we’re already seeing that — where clients and companies are using them to resolve issues before they even arise,” Walters said. 

Designing AI for Trust and Neutrality 

While adoption is accelerating, panelists agreed that the harder — and more important — work lies in design. 

Rule reflected on lessons from early online settlement platforms, where seemingly minor design choices unintentionally influenced outcomes. Even well-intentioned systems, he warned, can “put a thumb on the scale” if neutrality, data quality, and explainability are not addressed from the outset. 

“We need to build AIs for us, by us,” Rule said. “Where we know the data is clean, we’ve tested it, and nobody’s put their thumb on the scale.” 

The skepticism, in other words, was directed not at AI itself but at opaque tools built for consumer markets and retrofitted for dispute resolution. Panelists argued that alternative dispute resolution (ADR) professionals must actively shape the systems they rely on rather than outsource core procedural values to black-box technologies. 

Rule proposed the creation of independent audit mechanisms, potentially housed within organizations such as the International Council for Online Dispute Resolution, to test, de-bias, and validate emerging tools. These safeguards, he suggested, are essential to preserving neutrality, transparency, and due process as AI becomes more deeply embedded in dispute systems. 

Where AI Fits Best: Supporting, Not Deciding 

While arbitration raises complex questions about prediction and fairness, panelists largely agreed that mediation offers the most natural environment for responsible AI use. 

Mediation’s collaborative and creative nature aligns well with AI’s strengths: summarization, reframing, brainstorming, and organizing complex information. Used thoughtfully, AI can expand the range of possible solutions without replacing human judgment. 

“Don’t say to ChatGPT, ‘What is the resolution to this dispute?’” Rule said. “Say, ‘What are 20 ideas for resolving a barking dog dispute?’” 

Panelists emphasized that AI works best in mediation when it supports exploration rather than decision-making, helping mediators identify common ground, generate options, and manage information more efficiently so they can focus on empathy, communication, and trust-building. 

Richard Silberberg of Silberberg Dispute Resolution LLC highlighted another underexplored application: training. AI-powered simulations can create realistic, feedback-driven practice environments that help mediators refine listening, negotiation, and reframing skills in ways traditional role-play cannot. 

“I think a great use of AI in the mediation space, which I don’t see talked about very much, is for training mediators,” Silberberg said. 

What This Shift Means for ADR 

Across prevention, process, and outcomes, the session underscored a consistent theme: AI’s value in dispute resolution is not about automating judgment. It is about improving insight, expanding options, and enabling earlier, fairer intervention. 

As AI continues to mature, its impact on ADR will be measured less by novelty and more by design discipline — whether systems are explainable, neutral, and aligned with the profession’s core values. 

Explore more insights from the 2025 Future Dispute Resolution – New York Conference by downloading the full conference report.
