
Google and Anthropic Jointly Propose A2A Protocol: The HTTP of AI Agents

A new Agent-to-Agent (A2A) communication protocol aims to create interoperability standards for AI agents across platforms, potentially becoming the foundational infrastructure layer for multi-agent systems.

A Universal Language for AI Agents

In a move that could define the next decade of artificial intelligence infrastructure, Google DeepMind and Anthropic have jointly published a specification for the Agent-to-Agent Protocol (A2A), an open standard designed to let AI agents from different vendors discover, authenticate, and communicate with each other seamlessly. The protocol, announced at a joint press event on March 15, 2026, has already received endorsements from Microsoft, Salesforce, SAP, and the Linux Foundation.

The comparison to HTTP is deliberate and instructive. Just as the Hypertext Transfer Protocol created a universal standard for web servers and browsers to exchange information regardless of vendor, A2A aims to create a universal standard for AI agents to exchange tasks, context, and results regardless of which model or platform powers them.

Why Agent Interoperability Matters Now

The AI agent ecosystem in early 2026 is deeply fragmented. A company might use Salesforce's Agentforce for customer relationship management, Microsoft's Copilot agents for productivity, custom Claude-based agents for internal research, and Google Vertex AI agents for data analysis. These agents operate in isolated silos. They cannot delegate tasks to each other, share context, or coordinate workflows without expensive custom integration.

This fragmentation mirrors the early days of the internet, when proprietary networks like CompuServe, Prodigy, and AOL each had their own protocols and could not communicate with each other. The A2A protocol directly addresses this by defining three core layers:

Discovery Layer

Agents publish a standardized capability manifest, a machine-readable document (similar to OpenAPI specifications for REST APIs) describing what the agent can do, what inputs it requires, what outputs it produces, and what authentication methods it supports. A centralized or federated registry (the specification supports both models) allows agents to discover each other's capabilities.
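A capability manifest of the kind described above can be sketched in a few lines. The field names and structure here are illustrative assumptions, not the normative schema from the A2A specification:

```python
# Hypothetical A2A capability manifest, sketched as a Python dict.
# Field names (agent_id, capabilities, task_type, auth_methods) are
# illustrative; the actual specification may define them differently.
manifest = {
    "agent_id": "contract-analyzer.example.com",
    "version": "0.9",
    "capabilities": [
        {
            "task_type": "contract.review",
            "inputs": {"document": "application/pdf"},
            "outputs": {"summary": "text/markdown", "risk_score": "number"},
        }
    ],
    "auth_methods": ["oauth2", "mtls"],
}

def supports(manifest: dict, task_type: str) -> bool:
    """Check whether an agent advertises a given task type in its manifest."""
    return any(c["task_type"] == task_type for c in manifest["capabilities"])
```

A registry, centralized or federated, would index documents like this so that a calling agent can query for, say, every agent advertising `contract.review` and then negotiate authentication from the listed methods.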

Communication Layer

A2A defines a message format built on top of existing transport protocols (HTTP/2, gRPC, and WebSockets are supported in the initial specification). Messages follow a structured schema that includes task descriptions, context objects, constraint specifications, and result formats. Critically, the protocol includes a "context handoff" mechanism that allows one agent to pass relevant conversation history or task state to another agent without exposing the full internal state.
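The structured message schema and the context-handoff idea can be illustrated with a small sketch. The class and field names below are assumptions for illustration, not the spec's wire format:

```python
from dataclasses import dataclass, field
from typing import Any

# Illustrative A2A message shape; field names are assumptions, not the
# normative schema from the 147-page specification.
@dataclass
class A2AMessage:
    task_id: str
    task_description: str
    context: dict[str, Any] = field(default_factory=dict)      # handed-off state only
    constraints: dict[str, Any] = field(default_factory=dict)  # e.g. deadlines, budgets
    result_format: str = "application/json"

def handoff_context(full_state: dict, allowed_keys: set[str]) -> dict:
    """Context handoff: pass only the keys the receiving agent needs,
    never the sender's full internal state."""
    return {k: v for k, v in full_state.items() if k in allowed_keys}
```

The point of the filter is the one the protocol makes: the receiving agent gets the task-relevant slice of conversation history or state, while the sender's internal reasoning and unrelated data stay private.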

Trust Layer

Perhaps the most sophisticated component, the trust layer implements a capability-based security model. Agents carry cryptographically signed credentials that specify exactly what operations they are authorized to perform. Human principals can define delegation chains, allowing Agent A to invoke Agent B only for specific task types and with specific data access permissions.
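A minimal sketch of such a capability credential follows. HMAC with a shared secret stands in for the asymmetric signatures a real deployment would use, and all names are hypothetical:

```python
import hashlib
import hmac
import json

# Sketch of a capability-based credential: a signed statement that one
# agent may invoke another for specific task types only. HMAC is used
# here for brevity; the spec's trust layer would use public-key signatures.
SECRET = b"principal-signing-key"  # illustrative; real systems use asymmetric keys

def issue_credential(subject: str, audience: str, task_types: list[str]) -> dict:
    """The human principal signs a claim set delegating limited authority."""
    claims = {"sub": subject, "aud": audience, "task_types": task_types}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def authorize(cred: dict, task_type: str) -> bool:
    """Verify the signature, then check the requested task type is delegated."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and task_type in cred["claims"]["task_types"]
```

The security property is that authority is enumerated, not ambient: Agent A holding this credential can ask Agent B for a `contract.review`, and nothing else, no matter what else Agent B is capable of.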

Technical Architecture and Design Decisions

The A2A specification, published as a 147-page document on GitHub, makes several notable technical choices.

Asynchronous by default. Unlike traditional request-response APIs, A2A treats agent communication as inherently asynchronous. An agent sends a task request and receives a task ID. The receiving agent processes the task and posts results to a callback URL or publishes them to a shared event stream. This design acknowledges that agent tasks may take seconds, minutes, or even hours to complete.
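The submit-then-callback flow can be shown with a toy in-process stand-in. Class and method names are assumptions; in a real deployment the callback would be an HTTP POST to a registered URL or an event-stream publish:

```python
import uuid

# Toy illustration of A2A's asynchronous-by-default flow: submitting a
# task returns a task ID immediately; the result arrives later through
# a callback. Names are illustrative, not taken from the specification.
class ReceivingAgent:
    def __init__(self):
        self._tasks = {}

    def submit(self, description: str, callback) -> str:
        """Accept a task and return its ID without blocking on completion."""
        task_id = str(uuid.uuid4())
        self._tasks[task_id] = (description, callback)
        return task_id

    def complete(self, task_id: str, result: dict) -> None:
        """Seconds, minutes, or hours later: deliver the result."""
        _, callback = self._tasks.pop(task_id)
        callback(task_id, result)  # in practice: POST to a callback URL
```

Decoupling submission from completion is what lets a workflow span long-running agent tasks without holding connections open.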

Model-agnostic. The protocol explicitly does not assume any particular AI model or architecture. An A2A-compliant agent could be powered by GPT-4o, Claude, Gemini, Llama, or even a traditional rule-based system. The protocol only cares about inputs, outputs, and capabilities, not internal implementation.
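Model-agnosticism amounts to a structural interface: anything that accepts a task and produces a result can sit behind the protocol. A minimal sketch, with all names assumed for illustration:

```python
from typing import Protocol

# A2A's model-agnosticism sketched as a structural interface. The
# protocol cares only about inputs and outputs; what implements
# `handle` could be an LLM call or plain deterministic code.
class A2AAgent(Protocol):
    def handle(self, task: dict) -> dict: ...

class RuleBasedAgent:
    """No model at all: a lookup table still satisfies the interface."""
    def handle(self, task: dict) -> dict:
        answers = {"ping": "pong"}
        return {"result": answers.get(task["input"], "unknown")}

def dispatch(agent: A2AAgent, task: dict) -> dict:
    """A caller never needs to know what powers the agent."""
    return agent.handle(task)
```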


Human-in-the-loop hooks. Every task in A2A includes an optional "approval requirement" field. Organizations can configure their agents to require human approval before executing high-stakes actions (financial transactions above a threshold, data deletions, external communications) while allowing routine tasks to proceed autonomously.
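An organization's approval policy on top of that field might look like the following. The threshold, action names, and field names are illustrative policy choices, not text from the specification:

```python
# Hedged sketch of routing on A2A's optional "approval requirement":
# flagged or high-stakes tasks go to a human queue, routine tasks
# proceed autonomously. Threshold and action names are assumptions.
APPROVAL_THRESHOLD_USD = 10_000

def needs_approval(task: dict) -> bool:
    if task.get("approval_required"):        # explicitly flagged in the message
        return True
    if task.get("action") in {"data.delete", "external.send"}:
        return True
    return task.get("amount_usd", 0) > APPROVAL_THRESHOLD_USD
```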

Observability built in. Every A2A message includes a trace ID compatible with OpenTelemetry standards, enabling end-to-end monitoring of multi-agent workflows across organizational boundaries.
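OpenTelemetry-compatible tracing follows the W3C Trace Context format, where a `traceparent` value encodes `version-trace_id-span_id-flags`. Stamping one onto each message is what lets a multi-agent workflow be stitched into a single end-to-end trace:

```python
import secrets

# Generate a W3C Trace Context `traceparent` value of the kind
# OpenTelemetry propagates: version (00), a 128-bit trace ID, a
# 64-bit span ID, and trace flags (01 = sampled).
def new_traceparent() -> str:
    trace_id = secrets.token_hex(16)  # 32 hex chars: 128-bit trace ID
    span_id = secrets.token_hex(8)    # 16 hex chars: 64-bit span ID
    return f"00-{trace_id}-{span_id}-01"
```

Each agent hop would keep the trace ID while minting a fresh span ID, so a task delegated across three vendors still appears as one trace in whatever backend collects the telemetry.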

Industry Backing and the Standards Process

Google's VP of Cloud AI, Andrew Moore, and Anthropic's Head of Product, Mike Krieger, presented the protocol jointly, an unusual display of cooperation between competitors. Both companies have committed to making their respective agent platforms (Google Vertex AI Agents and Claude's tool-use API) A2A-compliant by Q3 2026.

Microsoft's response was swift and positive. Satya Nadella posted on LinkedIn that Microsoft would adopt A2A as a "first-class protocol" in Copilot Studio and Azure AI Agent Service, noting that "the age of proprietary agent silos must end for the industry to scale."

The Linux Foundation announced it would host the A2A specification under its AI & Data Foundation, providing neutral governance. An initial technical steering committee includes representatives from Google, Anthropic, Microsoft, IBM, Salesforce, SAP, and several startups including LangChain and CrewAI.

Not everyone is on board, however. OpenAI was notably absent from the announcement. Sources familiar with OpenAI's strategy suggest the company prefers its own Assistants API and function-calling protocol as the de facto agent communication standard, viewing A2A as a competitive threat to its platform ambitions.

Comparisons and Criticisms

Several industry observers have drawn comparisons to Anthropic's Model Context Protocol (MCP), released in late 2024, which standardized how AI models connect to external tools and data sources. A2A is explicitly designed to complement MCP rather than replace it. Where MCP defines how an agent talks to tools, A2A defines how agents talk to each other.

Critics have raised concerns about complexity. Dr. Sarah Chen, an AI systems researcher at Carnegie Mellon, noted that the specification "tries to solve every possible interoperability problem at once" and warned that overly ambitious standards often fail to achieve adoption. She pointed to the history of web services standards like SOAP and WS-* as cautionary tales of over-engineering.

Others worry about the security implications of making it easy for agents to invoke other agents across organizational boundaries. While the trust layer addresses authentication and authorization, the potential attack surface of interconnected agent networks is enormous and largely unexplored.

What This Means for Enterprises

For organizations deploying AI agents, the A2A protocol promises to solve one of the most painful practical problems: vendor lock-in and integration cost. Today, connecting a Salesforce agent to a custom internal agent requires building bespoke middleware. With A2A adoption, this becomes a configuration task rather than a development project.

The enterprise middleware market, currently estimated at $12 billion annually, could see significant disruption. Integration platform companies like MuleSoft, Boomi, and Workato have already announced plans to add A2A support, positioning themselves as "agent orchestration platforms" rather than traditional API gateways.

For startups building AI agents, A2A lowers the barrier to participation in enterprise workflows. A small company building a specialized AI agent for contract analysis, for example, could make that agent discoverable and invocable by any A2A-compliant platform without building separate integrations for each potential customer's tech stack.

The Road Ahead

The A2A specification is currently at version 0.9 (draft), with a 1.0 release planned for June 2026 after a public comment period. Reference implementations in Python, TypeScript, Java, and Go are expected by May 2026.

If A2A achieves the adoption its backers envision, it could become the foundational infrastructure layer for the emerging "agentic web," a network of interconnected AI agents that collaborate across organizational boundaries to accomplish complex tasks. Whether it achieves HTTP-level ubiquity or becomes another well-intentioned standard that fragments the ecosystem further remains to be seen.
