As developers building with generative AI, we have access to an incredible array of powerful frameworks. Tools like LangChain, CrewAI, and Google’s own Agent Development Kits (ADK) allow us to create highly specialized AI agents capable of tackling complex tasks. But this specialization has introduced a new challenge: the “framework jungle.”
When building sophisticated systems, the real hurdle isn’t creating a single agent, but getting multiple agents — each built with different tools — to communicate and collaborate. Imagine a data analysis agent built in LangGraph needs to hand off its findings to a report-writing agent in CrewAI, which in turn needs to save its output using a third agent. Today, this requires writing custom, often brittle, “glue code” for each interaction, a significant engineering effort that is difficult to scale and maintain.
This is the challenge that Google and a consortium of over 50 industry partners, including Salesforce, LangChain, and Atlassian, are attempting to solve. On April 9, 2025, they introduced the new, open Agent2Agent (A2A) protocol, a standard designed to provide a universal language for AI agents, allowing them to work together seamlessly, regardless of how they were built.
In this newsletter, we’ll break down what this means for us as we explore:
What the Agent2Agent (A2A) protocol is and the core problem it solves.
A look at its key technical concepts.
A hands-on walk-through to see different agents collaborate in real time.
Why this new standard could be a game changer for building the next generation of AI applications.
The Agent2Agent (A2A) Protocol is an open standard to establish a common communication layer. It enables AI agents built by different teams, with different tools, and from different organizations to effectively discover, communicate, and collaborate on complex tasks. It doesn’t replace frameworks like LangChain or CrewAI; rather, it provides the “how” of communication, allowing us to focus on the “what,” which is the unique value our agents provide.
To understand the problem A2A solves, let’s consider a practical e-commerce scenario. A customer asks their AI assistant, “My recent order hasn’t arrived. Can you check its status, and if it’s lost, issue a refund and give me a discount coupon?” A single request like this requires coordinating several specialized agents:
An Order Management agent to connect with the e-commerce platform (like Shopify) and retrieve the order details and tracking number.
A Logistics agent to use that tracking number to check the carrier’s API (like FedEx) for the package’s real-time status.
A Payments agent to connect to the payment gateway (like Stripe) and process a refund if the package is confirmed lost.
A Marketing agent to generate a unique discount coupon for a future purchase.
Integrating these diverse agents into a cohesive user experience is a significant engineering hurdle without a common communication protocol. Each connection would be a custom, point-to-point solution, making the system difficult to scale, maintain, and extend. A2A provides a standardized way for these independent systems to interact, turning a complex integration challenge into a manageable workflow.
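To make that coordination concrete, here is a minimal, purely illustrative Python sketch of the workflow logic. Every agent call is a stub, and the function names and return values are invented for illustration; in a real system each stub would be an A2A request to an independent, specialized agent:

```python
# Illustrative sketch only: each function stands in for a call to a
# separate, specialized agent in the e-commerce scenario above.

def order_agent_lookup(order_id):
    # Stub: the Order Management agent would query the e-commerce platform.
    return {"order_id": order_id, "tracking_number": "TRK-12345"}

def logistics_agent_status(tracking_number):
    # Stub: the Logistics agent would query the carrier's API.
    return "lost"

def payments_agent_refund(order_id):
    # Stub: the Payments agent would call the payment gateway.
    return {"refunded": True}

def marketing_agent_coupon():
    # Stub: the Marketing agent would generate a real coupon code.
    return "WELCOME-BACK-10"

def handle_missing_order(order_id):
    # The host coordinates the specialists in sequence, branching on status.
    order = order_agent_lookup(order_id)
    status = logistics_agent_status(order["tracking_number"])
    if status == "lost":
        return {
            "status": status,
            "refund": payments_agent_refund(order_id),
            "coupon": marketing_agent_coupon(),
        }
    return {"status": status}

result = handle_missing_order("ORDER-42")
```

The point of the sketch is the shape of the problem: four independent capabilities, one conditional workflow. A2A's job is to standardize the plumbing between those calls.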
One way to think about A2A is as the potential HTTP for the world of AI agents. Just as HTTP created a universal standard for web browsers and servers to communicate and exchange information, A2A aims to do the same for the rapidly expanding agent ecosystem, enabling interoperability on a global scale.
The Agent2Agent protocol facilitates collaboration by defining a few simple but powerful concepts. It's not about reinventing the wheel; the A2A protocol is built on existing, well-understood web standards like HTTP, JSON-RPC, and Server-Sent Events (SSE).
At a high level, any collaboration between two agents follows four key steps:
Discovery: A “host” agent first finds a specialized “remote” agent and learns its capabilities by reading its public Agent Card.
Initiation: The host agent sends a task to the remote agent, asking it to perform a specific action.
Execution: The remote agent receives the task and performs its core function, such as fetching data or generating content.
Response: The remote agent packages its result into a structured artifact and sends it back to the host agent.
While this four-step flow covers a successful interaction, A2A is engineered with the resilience and security required for real-world applications. The protocol’s design addresses these practical needs from the ground up:
Secure by default: The protocol is designed to support enterprise-grade authentication and authorization, ensuring that agents communicate securely.
Asynchronous-first: It natively handles long-running tasks where an agent might need time to work or require human input. It uses server-sent events (SSE) to stream status updates, so a host agent isn’t left waiting for a blocking response.
Robust life cycle and compatibility: The Task life cycle includes states for real-world outcomes, such as failed, for clear error handling. Furthermore, AgentCards include a version field, allowing agents to manage compatibility as they evolve.
Let’s bring these concepts to life with our Daily News Briefing system.
In this scenario, a primary “Host” agent will orchestrate two specialized agents: a “News Fetcher” agent, whose only skill is retrieving the latest tech headlines, and a “Summary Writer” agent, which excels at converting long text into a concise summary. We’ll show how the A2A structures are defined in code, focusing on the key components.
An Agent Card is a public JSON document that serves as a discoverable profile for an AI agent. Think of it as a digital business card or the equivalent of an OpenAPI/Swagger specification for a traditional API. Its primary purpose is to allow a “Host” agent to learn what a remote agent can do, how to communicate with it, and what to expect in return, all without prior configuration. It contains key information such as the agent’s name and description, the URL where it can be reached, and most importantly, a list of its skills, the specific tasks it can perform.
Here is the complete Python code to define the Agent Cards for our “News Fetcher” and “Summary Writer” agents.
# Import the necessary data structures from the A2A library.
from a2a.types import AgentCard, AgentSkill, AgentCapabilities

# 1. Define the Agent Card for our News Fetcher Agent.
news_fetcher_skill = AgentSkill(
    id='fetch-tech-news',
    name='Fetch Tech News Headlines',
    description='Retrieves the latest technology news headlines from a public source.',
    tags=['news'],
    examples=['Get me the latest tech news', 'What is happening in tech today?'],
)

news_fetcher_agent_card = AgentCard(
    name='News Fetcher Agent',
    description='A specialized agent that fetches technology headlines.',
    url='http://localhost:10001/',
    version='1.0.0',
    skills=[news_fetcher_skill],
    default_input_modes=['text/plain'],
    default_output_modes=['text/plain'],
    capabilities=AgentCapabilities(streaming=False),
)

# 2. Define the Agent Card for our Summary Writer Agent.
summary_writer_skill = AgentSkill(
    id='summarize-text',
    name='Summarize Text',
    description='Takes a block of text and returns a concise, one-paragraph summary.',
    tags=['summarization'],
    examples=['Summarize this article for me.', 'Give me the TL;DR for this.'],
)

summary_writer_agent_card = AgentCard(
    name='Summary Writer Agent',
    description='A specialized agent that creates concise summaries of long text.',
    url='http://localhost:10002/',
    version='1.0.0',
    skills=[summary_writer_skill],
    default_input_modes=['text/plain'],
    default_output_modes=['text/plain'],
    capabilities=AgentCapabilities(streaming=False),
)

# These AgentCard objects would be served as JSON from their respective URLs,
# making them discoverable to any other agent in the ecosystem.
With these cards defined, our host agent can now discover both agents and understand their specific capabilities simply by making an HTTP request to their respective URLs.
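That discovery step amounts to an HTTP GET of the card (by convention, A2A agents commonly serve it at a well-known path such as /.well-known/agent.json) followed by parsing the JSON. In this sketch the HTTP fetch is stubbed with a literal JSON string so the parsing step can be shown self-contained; the field values are taken from the News Fetcher card defined above:

```python
# Sketch of discovery: fetch the agent card JSON, then inspect its skills.
import json

def fetch_agent_card_json(url: str) -> str:
    # Stub standing in for an HTTP GET of the agent's card, e.g.
    # httpx.get(f"{url}/.well-known/agent.json").text
    return json.dumps({
        "name": "News Fetcher Agent",
        "description": "A specialized agent that fetches technology headlines.",
        "url": "http://localhost:10001/",
        "skills": [{"id": "fetch-tech-news", "name": "Fetch Tech News Headlines"}],
    })

# The host parses the card to learn the agent's name and skills,
# with no prior configuration required.
card = json.loads(fetch_agent_card_json("http://localhost:10001"))
skill_ids = [skill["id"] for skill in card["skills"]]
```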
Once our “Host” agent has discovered the News Fetcher via its Agent Card, it can initiate a conversation. In A2A, every request is wrapped in a task, representing the entire interaction from start to finish. Communication within a task happens through messages, which hold the content for each turn in the conversation. That content is placed inside parts designed to be flexible and support different data types (TextPart, FilePart, DataPart, etc.).
Here is how our Host agent would construct and send a request to the News Fetcher agent.
# Import the A2A client, the request types, an HTTP client, and a library for unique IDs.
from uuid import uuid4

import httpx

from a2a.client import A2AClient
from a2a.types import MessageSendParams, SendMessageRequest


async def send_request_to_news_fetcher():
    # 1. Initialize the A2AClient. It requires an HTTP client and the agent's card.
    #    'news_fetcher_agent_card' is the AgentCard object we defined earlier; in a
    #    real-world scenario this card would be fetched from the agent's URL.
    async with httpx.AsyncClient() as httpx_client:
        client = A2AClient(
            httpx_client=httpx_client,
            agent_card=news_fetcher_agent_card,
        )

        # 2. Construct the message payload as a dictionary.
        send_message_payload = {
            'message': {
                'role': 'user',
                'parts': [{'kind': 'text', 'text': 'Get me the latest tech news'}],
                'messageId': uuid4().hex,
            },
        }

        # 3. Wrap the payload in MessageSendParams and then a SendMessageRequest.
        params = MessageSendParams(**send_message_payload)
        request = SendMessageRequest(id=str(uuid4()), params=params)

        # 4. The client sends the complete request.
        response = await client.send_message(request)
        print("Received response:", response)
Once the News Fetcher agent receives the Task and completes its work, it needs a way to return the result. The A2A protocol handles this using an artifact. An artifact is the final, packaged output of a completed task. Like Messages, Artifacts can contain various Parts, allowing agents to return rich data, including simple text, files, images, and structured JSON.
Here is how our News Fetcher Executor would package the headlines into an artifact and enqueue it for sending back to our Host agent.
# Import the necessary data structures for our agent's response logic.
from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue, TaskArtifactUpdateEvent
from a2a.types import Artifact, TextPart


class NewsFetcherExecutor(AgentExecutor):
    """The execution logic for our News Fetcher Agent."""

    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:
        # In a real agent, this would make an API call.
        headlines = (
            "1. AI Agents can now talk to each other via A2A.\n"
            "2. Google launches open standard for agent collaboration."
        )

        # 1. It packages the result into a TextPart.
        result_content = TextPart(text=headlines)

        # 2. The part is wrapped in an Artifact to signify the final result.
        response_artifact = Artifact(parts=[result_content])

        # 3. An event containing the Artifact is created.
        artifact_event = TaskArtifactUpdateEvent(
            task_id=context.task_id,
            artifact=response_artifact,
        )

        # 4. The final event is placed on the event queue. The A2A server
        #    handles sending this back to the client that made the request.
        await event_queue.enqueue_event(artifact_event)
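Putting the pieces together, the Host agent's orchestration reduces to a simple chain: ask the News Fetcher for headlines, then hand the result to the Summary Writer. The sketch below stubs both remote calls (the function names and the summary format are invented for illustration); in the real system each stub would be an A2AClient request/response round trip like the one shown earlier:

```python
# Hypothetical end-to-end sketch of the Host agent's orchestration logic.
# Both remote calls are stubbed; in practice each would be an A2A task
# sent to the corresponding specialized agent.
import asyncio

async def call_news_fetcher(prompt: str) -> str:
    # Stub for an A2A task sent to the News Fetcher agent.
    return ("1. AI Agents can now talk to each other via A2A.\n"
            "2. Google launches open standard for agent collaboration.")

async def call_summary_writer(text: str) -> str:
    # Stub for an A2A task sent to the Summary Writer agent.
    first_line = text.splitlines()[0]
    return f"Today's top story: {first_line}"

async def daily_briefing() -> str:
    # The Host chains the two specialists: fetch, then summarize.
    headlines = await call_news_fetcher("Get me the latest tech news")
    return await call_summary_writer(headlines)

briefing = asyncio.run(daily_briefing())
```

Notice that the Host knows nothing about how either agent is implemented; it only depends on their cards and the protocol.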
A new protocol can sometimes feel like one more thing to learn, but A2A bucks that trend: it’s designed to reduce complexity, not add to it. Creating a common communication standard addresses a key challenge in modern AI development: the lack of interoperability between agents built with different, incompatible frameworks. This opens up new possibilities for how we build AI-powered applications.
For us, the benefits are tangible and can be broken down into three key areas:
Build with the best, not just one: We can break free from vendor or framework lock-in. A2A allows us to build powerful, composite systems by selecting the best agent for each job, regardless of its underlying technology. In our example, we could have a News Fetcher built in LangGraph for its robust tool use and a Summary Writer from a specialized third-party provider known for its summarization quality, and have the two collaborate seamlessly.
Focus on logic, not plumbing: We can spend less time writing custom, brittle “glue code” to connect disparate systems and more time focusing on what matters: our agents’ core logic and value. A2A standardizes the “how” of communication (discovery, task management, and data exchange) so we can dedicate our efforts to the “what” of our applications.
Tap into a universe of agents: An open standard is the foundation for a thriving ecosystem. As A2A adoption grows, we can expect a marketplace of specialized agents to emerge. Imagine being able to plug a highly advanced, third-party financial analysis agent or a medical terminology agent directly into our applications with minimal integration effort. This will dramatically accelerate development and foster innovation across the entire industry.
With Google’s recent announcement, some developers view the Agent2Agent protocol as a direct response to Anthropic’s Model Context Protocol (MCP). However, they aren't competitors; they're designed to solve two different, but related, challenges. MCP standardizes how an agent connects to and uses well-defined tools and resources, like calling a specific API or querying a database. In contrast, A2A standardizes how autonomous agents collaborate as peers to achieve a broader goal.
The easiest way to think about the distinction is: MCP is for an agent talking to a tool, while A2A is for an agent talking to another agent. A tool is a primitive with a well-defined, structured function, like calling a weather API, using a calculator, or querying a database. The interaction is predictable and transactional. Conversely, an agent is a more autonomous peer that can reason, plan, and engage in complex, stateful conversations to achieve a goal. A mature agentic application will use both: it will use MCP internally to interact with its specific tools and resources, and A2A externally to collaborate with other independent agents to solve larger problems.
While A2A creates a powerful foundation, it’s important to approach it with a clear understanding of its design and the current state of its ecosystem. As with any emerging standard, developers should consider a few key points:
Architectural scalability: By default, A2A interactions rely on direct, point-to-point HTTP connections between agents. While this is simple and effective for many use cases, at a massive scale (with hundreds or thousands of agents), this can create a complex web of connections. For such large-scale systems, developers may need to complement A2A with architectural patterns like a centralized message bus or an event mesh to manage communication more efficiently.
Security is a developer’s responsibility: The protocol is designed to be “secure by default” by supporting enterprise-grade authentication, but does not enforce a specific mechanism. The developer implementing the agent is responsible for securing agent endpoints, protecting sensitive Agent Cards, and securely managing credentials.
Tooling is still emerging: A2A is a new protocol, and the ecosystem around it is still growing. Advanced, integrated tools for debugging and observing complex workflows that span multiple agents and protocols are not yet mature. Early adopters should be prepared for the current state of tooling.
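The credential-management point above can be sketched simply. Since A2A leaves the auth mechanism to the implementer, one common pattern is to attach an Authorization header to the underlying HTTP client so every request to a remote agent carries the credential. The environment-variable name and bearer scheme here are illustrative assumptions, not anything the protocol mandates:

```python
# Illustrative credential handling for calls to a remote agent.
# The env var name and bearer scheme are assumptions for this sketch.
import os

def build_auth_headers() -> dict:
    # Pull the secret from the environment rather than hard-coding it.
    token = os.environ.get("NEWS_FETCHER_TOKEN", "dev-only-placeholder")
    return {"Authorization": f"Bearer {token}"}

headers = build_auth_headers()
# These headers would then be supplied to the HTTP client that the A2A
# client uses, e.g. httpx.AsyncClient(headers=headers), so every request
# to the agent's endpoint is authenticated.
```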
The launch of the Agent2Agent protocol signals a pivotal shift in how we approach building with AI. We are moving beyond the era of the single, monolithic model that tries to do everything, and into an era of collaborative AI. The future of complex problem-solving lies not in one super-intelligent agent but in an ecosystem of specialized agents, each contributing unique expertise.
This is where a standard like A2A becomes the critical enabling infrastructure. If protocols like MCP give agents tools, A2A gives them their colleagues. It provides the common ground for these autonomous systems to collaborate, negotiate, and build upon each other’s work. This journey is just beginning. As A2A matures and its ecosystem of partners grows, we can expect to see an explosion of new capabilities and a marketplace of interoperable agents. This will accelerate development and fundamentally change what’s possible with AI.
As developers, this is our opportunity to move from building isolated tools to being architects of intelligent, collaborative systems. The foundation has been laid. Now — it’s our turn to build on it.
The Agent2Agent protocol provides the communication layer, but the real power of the ecosystem comes from the specialized agents themselves. Mastering the frameworks to build these intelligent agents is the first step toward creating the next generation of collaborative AI applications.
To dive deep into building the agents that will power this new ecosystem, we recommend exploring these courses: