Real-world lessons on responsible AI use from the Future Dispute Resolution – New York Conference
Artificial intelligence has moved from the margins to the mainstream of legal practice. From document review and procedural summaries to drafting support and citation checks, AI is reshaping how disputes are prepared and managed. And as adoption accelerates, practitioners face a critical question:
How can we leverage AI’s efficiency without compromising fairness, trust, or professional judgment?
That question took center stage at the 2025 American Arbitration Association® Future Dispute Resolution – New York Conference in a candid debate on “Unleashing AI or Restraining It,” where panelists examined real-world examples of how AI is reshaping legal practice, weighing its promise and its pitfalls.
The discussion yielded three essential takeaways for practitioners navigating AI adoption today.
Human Judgment Must Still Lead — Even as AI Scales Efficiency
Panelists agreed that AI has proven valuable for procedural histories, document organization, citation verification, and large-scale evidence analysis. These efficiencies allow practitioners to move faster and work more effectively with complex record sets.
But they emphasized that legitimacy in arbitration still rests on human judgment. “There are a lot of sections of arbitral awards that really lend themselves to the integration of AI,” Sarah Reynolds of Kaplan & Grady said, “but at the end of the day, the arbitrator’s essential function really remains the exercise of judgment.”
No algorithm can currently replicate the human evaluation of credibility, nuance, fairness, or process integrity — core responsibilities that define the neutral’s role.
At the same time, panelists cautioned against overconfidence in AI outputs. Generative AI’s authoritative tone can give users a false sense of certainty, even when its conclusions are wrong or incomplete. That risk makes human skepticism and review non-negotiable.
Yet manual line-by-line verification of thousands of outputs eliminates much of AI’s value. Practitioners need systems designed for layered validation, where AI tools cross-check one another to flag anomalies or inconsistencies, allowing expert reviewers to focus where judgment matters most.
“It’s one thing to check a brief and make sure the citations are accurate. It’s another thing to check 10,000 documents and make sure the extractions are accurate,” Allen Waxman of DLA Piper said. “If you need to check all of those, you’ve lost the whole benefit of the efficiency of the system.”
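The panel did not prescribe a particular implementation, but the layered-validation idea can be sketched in simple terms: run two independent extraction passes over each document, auto-accept results where they agree, and route only the disagreements to a human reviewer. The sketch below is purely illustrative; the function names, data shapes, and placeholder “models” are assumptions, not any tool discussed at the conference.

```python
# Hypothetical sketch of layered validation: two independent AI extraction
# passes cross-check each other, and only disagreements are escalated to a
# human reviewer. All names and data shapes are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    """Collects documents whose extractions disagree and need human review."""
    flagged: list = field(default_factory=list)

    def add(self, doc_id: str, value_a: str, value_b: str) -> None:
        self.flagged.append({"doc_id": doc_id, "model_a": value_a, "model_b": value_b})


def extract_with_model_a(text: str) -> str:
    """Placeholder for a first extraction tool (e.g., pulling a contract date)."""
    return text.split(":")[-1].strip()


def extract_with_model_b(text: str) -> str:
    """Placeholder for an independent second tool (here it also lowercases)."""
    return text.rsplit(":", 1)[-1].strip().lower()


def layered_validation(documents: dict[str, str]) -> tuple[dict[str, str], ReviewQueue]:
    """Accept extractions where both passes agree; flag the rest for review."""
    accepted: dict[str, str] = {}
    queue = ReviewQueue()
    for doc_id, text in documents.items():
        a = extract_with_model_a(text)
        b = extract_with_model_b(text)
        if a == b:
            accepted[doc_id] = a          # consistent outputs pass through
        else:
            queue.add(doc_id, a, b)       # inconsistencies go to an expert reviewer
    return accepted, queue


if __name__ == "__main__":
    docs = {
        "ex-001": "Contract date: 2021-03-14",
        "ex-002": "Contract date: 14 March 2021",
    }
    accepted, queue = layered_validation(docs)
    print(f"Auto-accepted: {len(accepted)}, flagged for review: {len(queue.flagged)}")
```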
What this means for practitioners:
AI should accelerate preparation and analysis, not replace independent judgment. The goal isn’t the automation of decision-making; it’s the redeployment of human expertise to the complex evaluative tasks only people can perform.
Diligence and Disclosure Are the Foundations of Responsible AI Use
AI’s efficiency gains are undeniable, but they also raise complex questions for practitioners: What must be disclosed? How is confidentiality safeguarded? And where does professional responsibility reside when technology assists the work?
Panelists emphasized that while some AI tools may be new, the duties governing their use are not. Obligations of competence, diligence, candor, and confidentiality remain the ethical guardrails guiding responsible adoption. Neutrals must independently evaluate AI-assisted content rather than adopt it wholesale, and counsel must safeguard sensitive material by avoiding uploading it to unsecured or open-source platforms.
“We can’t just…sign off on an AI-drafted award without…ensuring that we’re not delegating decision-making to the AI tool,” Reynolds said.
Panelists also addressed the role of disclosure and consent. While some parties may accept greater AI involvement in exchange for lower costs or faster outcomes, panelists stressed that any such tradeoffs must be knowingly and transparently agreed to, never presumed by neutrals or institutions on the parties’ behalf.
“What’s important is that the parties choose to assume those risks,” Reynolds said. “You would never want to be in a position as an arbitrator that you assume those risks on behalf of the parties.”
Institutions are stepping in to provide clearer guidance where uniform, bright-line rules have not yet fully emerged. The AAA-ICDR® Guidance on Arbitrators’ Use of AI Tools, for example, advises arbitrators to:
- Disclose intended AI use that could materially affect arbitration proceedings or legal reasoning
- Obtain party consent where required
- Ensure AI supports — but never replaces — human judgment
The Chartered Institute of Arbitrators has echoed these expectations in model guidance encouraging early disclosure and the withdrawal of AI tools if a party objects.
Panelists drew distinctions between:
- Tribunals and Advocates: Tribunals should disclose planned AI use at the outset; counsel disclosures remain context-dependent
- Productivity Tools and Decisional Tools: Everyday research or drafting support raises different obligations than technologies influencing judgment or award reasoning
What this means for practitioners:
Responsible AI use begins with transparency, informed consent, and professional accountability, not speed alone.
Fit-for-Purpose: Where AI Belongs — and Where It Doesn’t
Perhaps the clearest consensus of the debate: AI is not a one-size-fits-all solution in alternative dispute resolution (ADR).
Panelists described the growing use of AI arbitrator models, such as the AAA’s, as appropriate for documents-only, lower-value disputes in which parties opt in for faster, more cost-effective resolution.
However, these tools are not substitutes for human tribunals in complex, high-stakes matters that involve hearings, credibility determinations, and procedural nuance.
“Do you want a machine making that decision or is there something about the human experience?” said Waxman.
Beyond suitability, panelists stressed the importance of maintaining procedural balance. If one party wields advanced technology while the other lacks comparable access, fairness can be compromised. “A point that I want to bring to attention is that of equality of arms,” Sarah Chojecki of ArbTech said. “What happens in a dispute when one party has access to all the technology, and one doesn’t?”
Institutions play a vital role here by developing transparency measures, disclosure protocols, and safeguards that preserve a level playing field for all participants while continuing to refine standards for emerging technologies.
What this means for practitioners:
AI should expand access to justice — not introduce new disparities. Technology must serve fairness, not tilt it.
The Debate’s Bottom Line: Efficiency Enables What Matters Most
Despite differing viewpoints on the speed of adoption or ideal frameworks, panelists shared a common conclusion:
The true value of AI lies in strengthening — not diminishing — human judgment.
When deployed responsibly, AI frees arbitrators, mediators, and advocates from routine tasks so they can devote more time to:
- Complex case strategy
- Credibility assessments
- Ethical decision-making
- Client counseling
- Human problem-solving
The session revealed a profession that is not resisting innovation but actively shaping it. The question is no longer whether AI belongs in ADR: it’s how the field ensures its use remains grounded in transparency, neutrality, competence, and trust.
Explore more insights from the 2025 Future Dispute Resolution – New York Conference by downloading the full conference report.