
By Jordi Chaffer 

Governing the Agentic Web 

In a bold move that signals the shape of things to come, Mastercard announced in April 2025 its intention to partner with Microsoft and other leading AI platforms to scale Agentic Commerce. This is not just another evolution in fintech. It is a sign that a radically different paradigm for the internet is emerging, where autonomous AI agents will act on behalf of individuals, organizations, and even other agents in what is increasingly being termed the “Agentic Web”.

The Agentic Web represents a major shift in the evolution of the internet, moving from a human-centered information network to an ecosystem populated by autonomous AI agents that perceive, decide, and act independently. This shift is driven by new foundational protocols: Anthropic’s Model Context Protocol (MCP) standardizes how agents connect to tools and data sources; Google’s Agent2Agent (A2A) protocol enables cross-platform agent communication; and Microsoft’s NLWeb turns websites into agent-readable services. Together, these represent a transition from user engagement to user delegation, in which humans increasingly entrust intent, not just tasks, to machines.

Decentralized Agents and the Limits of Traditional Governance 

In the Agentic Web, AI agents are set to become first-class participants capable of representing interests in digital environments. While the behavioral and legal uncertainties of AI agents pose significant challenges, these risks are amplified by the emerging economic dimensions of the Agentic Web.

At scale, the internet could consist of billions of agents that transact, interact, and make decisions, completely changing the paradigm of how value flows. In a keystone paper on the Agentic Web authored by prominent computer scientists such as Dawn Song of UC Berkeley, researchers suggest that integrating blockchain technology presents promising opportunities for the Agentic Web’s economic foundation, primarily as a decentralized medium of value exchange between AI agents. Signs of a decentralized agentic economy were already visible by late 2024, when two AI agents, Luna Virtuals and Stix, reportedly executed the first fully autonomous transaction on a blockchain. The Virtuals protocol, which generated $43 million in revenue over just two months and supported over 11,000 agents, demonstrates the scalability and potential of the emerging Web3 x AI agents paradigm. The story of Truth Terminal, an AI influencer that amassed wealth, launched a cryptocurrency, and hired humans to spread its message, further demonstrates the social and economic impact of this emerging paradigm.

While this is an exciting frontier of innovation, there is a growing literature around the dangers of decentralized AI agents (DeAI agents, DeAgents) or crypto AI agents (CAIAI). Indeed, Marino and Juels (2025) argue that giving AI agents access to cryptocurrency and smart contracts creates new vectors of harm by virtue of their autonomy, anonymity, and automaticity. As Hu and Rong argue, DeAgents introduce governance challenges that are not just technical, but structural. Once deployed on blockchains, housed in trusted execution environments (TEEs), and funded by on-chain treasuries, DeAgents may operate indefinitely without the possibility of direct human intervention. This design, intended to remove single points of failure and monopolistic control, simultaneously erodes conventional levers of oversight.

Hu and Rong highlight three interlocking traits that make DeAgents particularly resistant to traditional forms of regulation.

First, their borderless execution allows computation to migrate fluidly across decentralized infrastructure, evading the reach of any single jurisdiction the moment it becomes hostile.

Second, they possess immutability as armor: once their logic is embedded in a smart contract, it is effectively read-only, making it impossible to modify quickly enough to contain an exploit or halt harmful behavior.

Finally, their economic self-sufficiency via integrated wallets and treasuries enables them to continuously purchase compute power, pay bounties, and sustain their operations across multiple networks, even in the face of targeted interventions.

These characteristics of decentralized agents create a paradox: the very features that make them innovative also make them nearly impossible to govern through traditional means. This regulatory resistance underscores the urgent need for new approaches to securing the Agentic Web that can preserve its benefits while mitigating its risks.

Securing an Open Agentic Web 

This points to a deeper tension underpinning the Agentic Web: the choice between open ecosystems and walled gardens. As Rothschild and colleagues at Microsoft Research highlight, if proprietary standards dominate, the Agentic Web risks fracturing into isolated domains, undermining its potential as a shared, interoperable infrastructure. This fragmentation would stifle innovation, limit agent collaboration, and create asymmetrical power structures where a few corporations control access to critical digital spaces. Avoiding this outcome demands collaborative, industry-wide efforts to promote open standards and transparent coordination mechanisms, ensuring that the Agentic Web remains a dynamic, decentralized network where agents (and their human counterparts) can interact freely and securely.

The urgency of such efforts becomes clearer in light of early internet history. As noted in California’s recent report on Frontier AI Policy, early design choices and policy inaction created entrenched vulnerabilities that still affect today’s digital systems. This point is highlighted in their examination of the 1988 Morris worm incident, where a self-replicating program disabled nearly 10% of internet-connected devices. The attack demonstrated the dangers of assuming trust in infrastructure without safeguards. Despite longstanding warnings, governance remained ad hoc and heavily reliant on outdated legal frameworks.

The parallels to today’s Agentic Web are unmistakable. Just as the early internet’s trust assumptions proved dangerously naive, the current trajectory toward billions of autonomous agents operating without clear identity or accountability mechanisms threatens to create an ungovernable digital ecosystem. The solution lies not in restricting agent capabilities, but in establishing verifiable trust foundations.

Know Your Agent

There is currently no consensus in the Identity & Access Management industry on how to identify, authorize, and audit agentic AI systems. In fact, a major theme at this year’s European Identity and Cloud Conference was defining agentic identity and access controls. This gap has prompted the formation of specialized working groups, such as the OpenID Foundation’s Artificial Intelligence Identity Management Community Group.

As the OpenID Foundation recognizes, AI is disrupting many dimensions of digital transactions and human-digital interfaces, from social interactions and commerce to financial services and business-to-business transactions. However, the silos between the AI and identity communities risk leading AI platforms to underutilize established identity standards, creating gaps that may not be addressed at an optimal pace and repeating known problems in security, privacy, and interoperability.

The foundation has identified specific gaps in standards around how AI agents assert identity to external servers, how tokens move between multiple AI agents, and how agent discovery and governance should function. These technical challenges underscore a broader truth:

Without verifiable structures for identity, provenance, and accountability, the Agentic Web risks becoming an opaque system in which it is impossible to know who you are interacting with. This is why trust must be embedded in the Agentic Web through enforceable preconditions for agent participation, including identity verification, capability declarations, behavioral logging, and accountability mechanisms. In short, we need a Know Your Agent (KYA) standard.

KYA builds on the familiar concept of Know Your Customer (KYC), which requires financial institutions to verify client identities to prevent fraud, money laundering, and terrorism financing. Originating in anti-money laundering laws of the 1970s and strengthened after 9/11, KYC is now a global regulatory standard enforced by governments and bodies such as the Financial Action Task Force (FATF).

A KYA framework could include the following core components:

  1. Agent Identity: Every agent should have a verifiable digital identity, such as a decentralized identifier (DID).
  2. Declared Capabilities: Agents should publish a verifiable, machine-readable manifest of their capabilities and permissions.
  3. Provenance: KYA would require clear lineage tracking: who built the agent, who modified it, and who currently governs it.
  4. Behavioral Logs: To support audits, agents should maintain tamper-proof logs of key actions, recording what decisions were made, under what conditions, and using which data sources.
  5. Delegation Models: Since agents may delegate tasks to other agents, KYA would define mechanisms to track and verify such delegation, including scope, time limits, and permissions.
  6. Revocation and Updating: Agents should be subject to recall if they violate policies.
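To make these components more concrete, the sketch below shows one way a KYA manifest could be represented in code. It is purely illustrative: the field names, the example DID, the capability labels, and the hash-based fingerprint are hypothetical design choices, not part of any existing KYA standard.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass
class KYAManifest:
    """Hypothetical Know Your Agent manifest (illustrative, not a standard)."""
    did: str                  # Agent Identity: a decentralized identifier
    capabilities: list        # Declared Capabilities the agent may exercise
    provenance: dict          # Provenance: builder and current governor
    delegation_scope: list    # Capabilities the agent may delegate onward
    revocable: bool = True    # Revocation: agent is subject to recall

    def fingerprint(self) -> str:
        """Deterministic hash of the manifest, which behavioral logs could
        reference so auditors can tie actions to a specific declared state."""
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

manifest = KYAManifest(
    did="did:example:agent-123",  # example DID, not a real registration
    capabilities=["payments:initiate", "search:web"],
    provenance={"builder": "ExampleCorp", "governor": "ExampleCorp Trust Board"},
    delegation_scope=["search:web"],
)
print(manifest.fingerprint()[:16])
```

Anchoring each behavioral log entry to the manifest fingerprint would let an auditor detect whether an agent acted under capabilities it never declared.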

KYA offers a policy framework to think about the need for AI agents to be authenticated, authorized, auditable, and aligned with the goals of their human or institutional principals. Early infrastructure is already taking shape, with foundational work by Tobin South, Alan Chan, and Jared James Grogan contributing to the push for standardized AI agent identity. Industry leaders are also developing practical approaches: Okta advocates for logging for all identities (human and non-human), standardized authentication and authorization for agents, least privilege enforcement to limit agent access, and Cross-Application Access (CAA) to replace static credentials with real-time, policy-based controls. Further, there are projects developing trust registries for agents that record decentralized identifiers, link them to owners or developers, and include compliance attestations.
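A trust registry of the kind described above could serve as a gate on agent interactions: before accepting a request, a counterparty checks that the agent's DID is registered, not revoked, and carries the required attestations. The following sketch is a toy illustration under those assumptions; the registry contents, attestation labels, and lookup function are hypothetical, not an existing API.

```python
# Hypothetical in-memory trust registry mapping agent DIDs to records
# linking them to owners and compliance attestations.
TRUST_REGISTRY = {
    "did:example:agent-123": {
        "owner": "ExampleCorp",
        "attestations": ["kya:identity-verified", "kya:capabilities-declared"],
        "revoked": False,
    },
}

def is_trusted(did: str, required: set) -> bool:
    """Admit an agent only if it is registered, not revoked, and holds
    every attestation the counterparty requires."""
    entry = TRUST_REGISTRY.get(did)
    if entry is None or entry["revoked"]:
        return False
    return required.issubset(entry["attestations"])

print(is_trusted("did:example:agent-123", {"kya:identity-verified"}))  # True
print(is_trusted("did:example:unknown", set()))                        # False
```

In practice such a registry would live on shared infrastructure rather than in memory, but the admission logic (registered, unrevoked, attested) would look much the same.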

Just as KYC became indispensable to global finance, KYA must become the foundation of trust in the Agentic Web before billions of agents shape our digital future without accountability. The consequences of inaction will be difficult to reverse. We must therefore create the conditions under which agents can be identified, understood, and governed.