We trust human decision-makers every day, even when we can’t see how they reached their conclusions. So why does transparency suddenly become a concern when AI enters the picture?
That question sits at the center of a recent Wall Street Journal article, “AI Can Make Decisions Better Than People Do. So Why Don’t We Trust It?”, featuring American Arbitration Association® (AAA®) President and CEO Bridget McCormack. The piece examines why skepticism toward AI persists across industries, including law, even as thoughtfully designed systems can offer levels of transparency, auditability, and accountability that human decision-making rarely provides.
In the article, McCormack draws a compelling comparison between traditional human decision-making and bespoke, governed AI systems. She notes that judges, arbitrators, and other legal decision-makers routinely make high-stakes decisions, yet the reasoning behind those decisions is not always visible. By contrast, AI systems can be built to show their work: to be auditable, explainable, and transparent in ways that human decision-makers simply cannot be.
That principle is central to the AAA's AI Arbitrator, an AI-led decision-making system built for a specific category of cases: documents-only, two-party construction disputes. The system evaluates the record, applies governing standards, and produces a proposed reasoned decision that is transparent and explainable. A human arbitrator remains part of the process to review, verify, and finalize every output, including the award, and outcomes are monitored against those reached by experienced human arbitrators.
As highlighted in the Wall Street Journal, the AI Arbitrator also reflects a broader trend toward accountability in algorithmic decision-making. In fields such as insurance and finance, AI systems are held to standards similar to those applied to human decision-makers, including scrutiny for bias and consistency. Over time, that oversight has driven meaningful improvements, particularly in explainability and data-driven monitoring.
Despite these advances, trust remains the central challenge. Public concern about AI is widespread, often shaped by high-profile failures in unrelated applications, leading many to conflate specialized legal tools with general-purpose consumer technology. McCormack addresses this distinction directly: the AI Arbitrator is not a general-purpose system, nor an experimental tool deployed without safeguards. It is purpose-built for a defined use case, operates within clear parameters, is subject to ongoing evaluation, and is used only with the agreement of both parties at the outset of a dispute. As McCormack observes, we readily accept the opacity of human decision-making despite decades of research documenting its limitations. The question now is whether we are willing to engage thoughtfully with tools that invite scrutiny rather than resist it.
The AAA has long played a leadership role in shaping the future of dispute resolution, and the introduction of the AI Arbitrator continues that tradition. Grounded in human legal reasoning, the system reflects a broader commitment to advancing AI in arbitration in ways that serve parties, counsel, and the integrity of the process.
For those interested in the broader conversation about trust, technology, and decision-making, the full Wall Street Journal article is available here.
To learn more about the AI Arbitrator and the future of AI in arbitration, visit adr.org/ai-arbitrator.