Ethical ADR in the Age of AI

Neutrality. Fairness. Trust. These are the values that underpin alternative dispute resolution (ADR). Now that AI is embedded in legal practice, how do we ensure they still hold?

This question framed “Engineering Ethical AI: Applying and Evolving Ethical Canons in ADR,” a session at the 2025 American Arbitration Association® (AAA®) Future Dispute Resolution – New York Conference. Panelists examined how long-standing ethical principles must adapt as technology begins to shape how arguments are formed, evidence is evaluated, and professional judgment is exercised. 

Drawing on the Code of Ethics for Arbitrators in Commercial Disputes, the Model Standards of Conduct for Mediators, and emerging institutional guidance — including the AAA-ICDR® Guidance on Arbitrators’ Use of AI Tools and the AAAi Standards for AI in ADR — the discussion focused on what responsible AI use looks like in practice.  

Ethics Now Includes Understanding the Technology 

One of the panel’s clearest messages was that ethics and technical competence cannot be separated. As generative AI tools become more powerful and accessible, lawyers and neutrals must understand how these systems function, and where they fall short, rather than treating them as simple productivity aids.

“There’s not a clean separation between ethical discussions and discussions of competency,” said Sam Prevatt of Freshfields. “It is now an ethical obligation of everyone in the profession to become competent in the use of GenAI.” 

That competence includes understanding how these systems generate outputs, where they fail, and how hallucinations can appear convincing, even authoritative. Panelists emphasized that responsibility remains firmly human: verifying sources, checking accuracy, and exercising independent judgment. 

For neutrals, this responsibility can extend to disclosure. Daniel Gonzalez of Hogan Lovells urged arbitrators to be specific and proactive, particularly at the outset of a case, about how AI tools are used. Generic references to “transparency,” he suggested, are insufficient. Parties should know what tools are in play, what they are being used for, and how confidentiality and data security are being handled. 

Trust, the panel emphasized, does not come from the technology itself. It comes from professionals who use it thoughtfully and openly. 

Fairness, Not Fear, Should Guide AI Use 

The panel also explored the practical challenges AI introduces into arbitration and mediation, from confidentiality concerns to potential imbalances when one party relies heavily on AI and the other does not. 

Mediators described real-world situations in which self-represented parties used AI tools during caucus, raising new questions about disclosure, consent, and neutrality. Advocates pointed to an emerging reality: filings that include hallucinated citations, sometimes unintentionally, forcing lawyers to decide when and how to address those errors.

“In evaluating when to confront the use of AI, you’re not going to ever want to jeopardize the situation for your own client,” Gonzalez said. “You also want to fairly consider the nature of the use of AI by your opponent before going in guns blazing.” 

Rather than responding with blanket prohibitions, panelists broadly agreed that fairness, not fear, should shape how AI is integrated into ADR.

“I do not think there should be an option to object outright to the use of AI in your proceeding,” Prevatt said. “It would be like going back to 1983 and objecting to the use of personal computers.” 

Instead, the panel pointed to institutional guidance, such as the AAA-ICDR’s recommendations and emerging model orders, as essential tools for helping the field navigate these issues consistently and transparently.

The Subtler Risk: AI Shapes Human Thinking 

Beyond procedural questions, the panel explored a quieter but equally significant risk: how AI can influence human judgment. 

Professor Robyn Weinstein of Cardozo Law said that even experienced professionals are vulnerable to cognitive bias when working with AI systems. Because these tools often produce fast, polished, and confident outputs, users may be more likely to accept them at face value, even when they are flawed.  

“When we see output from these systems, we are more likely to be anchored by them, more likely to believe them,” she said. “As mediators, if we are using the tools to assess damages or assess the cases, we might be anchored ourselves in the recommendations we’re making.” 

Weinstein also referenced research suggesting that reliance on AI tools can reduce critical thinking in certain contexts. 

“I think we have to be concerned about eroding our critical thinking,” she said. “Not taking shortcuts…actually being proactive in how we’re assessing it, and not just focusing on the efficiency piece.” 

The panel emphasized that ethical AI use in ADR is not just about tool selection. It is about mindset. Reflection, skepticism, and independent judgment remain essential, especially in processes built on trust. 

What This Moment Demands from ADR 

Rather than treating ethics as a constraint, the panel presented ethical frameworks as enablers: structures that allow new technologies to be integrated responsibly.

AI will continue to reshape dispute resolution. Whether that change strengthens or weakens trust will depend not on the tools themselves, but on the choices professionals make about disclosure, fairness, competence, and human judgment. 

Explore more insights from the 2025 Future Dispute Resolution – New York Conference by downloading the full conference report.

Download the AI & ADR Insights Report

January 15, 2026
