The Missing Layer in AI-Driven Commerce: A Trust Framework for Agentic Commerce Disputes

AI agents are negotiating agreements, executing services, and moving money, often without direct human involvement. But the legal and trust infrastructure that underpins commerce was not designed for non-human actors.

Those systems still assume human participation. They depend on our ability to verify who is participating, understand intent, and hold parties accountable when something goes wrong. AI agents do not fit neatly into any of these assumptions, and, as their capabilities expand, the disconnect between technological possibility and institutional readiness is becoming harder to ignore.

A new paper, authored by Bridget McCormack, president and CEO of the American Arbitration Association®, and David Fischer, CEO of Integra Ledger, titled “Identity, Trust, and the Legal Foundations of Agentic Commerce,” explores these challenges and outlines a framework for addressing them.

A Growing Gap in Digital Trust  

Much of the current conversation around AI-driven transactions has focused on enabling agents to move money. Commerce, however, is not just about payment. It is about relationships, including negotiation, agreement, performance, and, when necessary, resolution.

The systems needed to support those relationships in an agentic space are still underdeveloped. As a result, questions around identity, authority, and enforceability are surfacing, particularly in smart contract dispute scenarios where transactions execute seamlessly but accountability, intent, and recourse remain unclear.

Without a way to connect autonomous actions back to real-world identity and legal systems, trust becomes difficult to establish and even harder to sustain.

A Framework, Not a Final Answer 

Rather than offering a single solution, the paper introduces a different way of thinking about the problem: trust must be built in layers.

This approach proposes that no single mechanism, whether technical or legal, will be sufficient on its own. Instead, trust emerges from the combination of multiple elements working together, including identity, authority, agreement, and accountability.

At a high level, the framework focuses on four essential questions for resolving agentic commerce disputes:

  • Who is ultimately behind an action?  
  • On whose behalf are they acting?  
  • What was agreed to, and under what terms?  
  • What authority was granted, and where are its limits?  

Answering these questions consistently and verifiably is what makes meaningful trust and enforceable commerce possible. 

Why This Matters Now 

This is not the first time technology has outpaced the systems that govern it. New ways of interacting and transacting have always required legal and institutional frameworks to catch up.

What makes this moment different is the nature of the actors themselves. AI agents are not just tools. They are participants in economic activity, operating with increasing autonomy and at a scale that challenges traditional assumptions about identity, intent, and responsibility. Without a framework to anchor their actions in real-world accountability, even the most efficient systems risk becoming unreliable.

Agent-driven commerce is already expanding. The challenge is whether the surrounding infrastructure can keep pace.

Download the full report to explore this challenge in depth and gain a clear framework for restoring trust, accountability, and enforceability in an agent-driven economy. 


April 15, 2026
