As developers building with generative AI, we have access to a powerful array of frameworks. Tools like LangChain, CrewAI, and others allow us to create specialized AI agents to tackle complex, individual tasks. But this specialization has introduced a significant challenge: a fragmented ecosystem in which agents and tools built on different frameworks cannot easily talk to one another. This interoperability gap is the core problem that agentic protocols, a new set of open standards for agent communication, are designed to solve.
Let’s consider a scenario. We are tasked with building an AI-powered travel assistant. This assistant needs to perform a series of distinct operations: check our calendar for available dates, query a live flight API, search a hotel database, and finally, draft a confirmation email. The real engineering hurdle isn’t building each component in isolation; it’s making them communicate reliably and efficiently.
Without a shared standard, we must write brittle, custom glue code for every point-to-point interaction. This approach introduces several critical engineering problems:
Custom integration overhead: We must build a separate, custom connector for every tool. This means writing unique code to handle the specific authentication, request format, and data structure for the calendar, then repeating the process for the flight API, and once again for the hotel database. This integration is time-consuming and doesn’t scale.
Lack of reusability: Other agents cannot reuse the code written to connect our travel assistant to the airline’s API. We continue to build from scratch rather than on a common foundation.
Maintenance burden: When one of these external services inevitably changes its API, a key part of our application breaks. This creates a fragile system where maintenance becomes a constant cycle of reactive fixes.
Vendor lock-in: By building directly against proprietary APIs, we lock our system into specific implementations. Swapping one flight data provider for another requires a complete rewrite of that part of our application.
But what if there were a universal language designed to solve this?
This is the core promise of agentic protocols: a set of open standards that provide common ground for AI agents to discover, communicate, and collaborate effectively, regardless of how they were built.
Agentic frameworks have always allowed agents to connect to tools and chain commands. These protocols introduce a critical shift from proprietary, AI framework-specific integrations to a universal standard for agentic AI interoperability. The agent we build for one system can seamlessly communicate with tools and other agents across the entire ecosystem.
In this guide, we will explain what this means for us as we explore:
How does the Model Context Protocol (MCP) empower agents to use external tools and data?
How does the Agent2Agent (A2A) Protocol enable different agents to collaborate as a team?
What is the story of the Agent Communication Protocol (ACP), and why does its convergence with A2A matter?
A practical look at each protocol with working code examples and common use cases.
This guide is for AI and software developers building with agentic frameworks and looking for a clear, standardized solution to the integration and interoperability challenges in modern AI systems. Let’s begin by understanding how we give our agent the ability to interact with the world around it through tools and data sources.
The Model Context Protocol (MCP) is an open-source standard introduced by Anthropic in November 2024, designed to bridge an AI’s reasoning capabilities with external systems such as APIs, files, and databases.
The easiest way to think of MCP is as a universal USB-C port for AI. Just as USB-C provides a standardized way to connect peripherals to a computer, MCP allows an AI agent to plug into any external tool or data source using a single, consistent interface.
Consider our travel assistant example, which needs to interact with the outside world of APIs, files, and databases. Without a standard, we would need to write custom code to handle request formats, authentication, and response parsing for the calendar API, the flight booking service, and the hotel database. If any of those external services change their API, our integration fails.
MCP solves this by introducing a layer of abstraction. Instead of teaching our agent to handle three different API formats, we only teach one: MCP. Each external tool is exposed through a standardized interface that translates the tool’s specific functions into the common MCP format. This makes our system modular, scalable, and far easier to maintain.
The MCP architecture follows a classic client-server model and is defined by three key participants:
MCP host: The primary AI application that a user interacts with, such as our travel assistant.
MCP client: A component within the host that handles direct communication with one specific external tool.
MCP server: A program that wraps an external tool or data source, exposing its capabilities in the standardized MCP format.
Communication between these components is structured around three core primitives:
Tools: A tool is an executable function that allows an AI agent to perform a specific action in an external system. Think of tools as the verbs in the agent’s vocabulary; they actively do something that can cause a change. A tool could be a function that books a flight, sends an email, or updates a record in a database.
Resources: A resource is a read-only source of data that provides an AI agent with the context to reason and act. These are the “nouns” the agent can reference. A resource might be a local file, the current schema of a database, or the contents of a user’s calendar for a specific day.
Prompts: A prompt is a predefined, reusable template that packages a complex, multi-step task into a single, structured command. It acts as a blueprint that guides an agent using specific tools and resources to achieve a larger goal. For example, a vacation-planning prompt could guide the AI to check a calendar (a resource), search for flights (a tool), and present the options in a specific format.
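To make these three primitives concrete, here is a minimal sketch of how each one is registered using the FastMCP helper from the official MCP Python SDK (the same helper used in the server example below). The travel-assistant names here (book_flight, the calendar:// URI, plan_vacation) are illustrative placeholders, not a real service:

# A minimal sketch of all three MCP primitives. Names are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("TravelAssistant")

# Tool: a verb the agent can invoke to cause a change.
@mcp.tool()
def book_flight(flight_id: str) -> str:
    """Books the flight with the given ID and returns a confirmation."""
    return f"Flight {flight_id} booked."

# Resource: a read-only noun the agent can load as context.
@mcp.resource("calendar://{date}")
def get_calendar(date: str) -> str:
    """Returns the user's calendar entries for a given date."""
    return f"No meetings scheduled on {date}."

# Prompt: a reusable template that guides a multi-step task.
@mcp.prompt()
def plan_vacation(destination: str) -> str:
    """A template combining the calendar resource and the flight tool."""
    return f"Check my calendar for free dates, then search flights to {destination}."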
The best way to see how MCP works is to build one half of the connection: the MCP server. We will create a simple server that exposes a single, practical tool. This demonstrates how we can take any standard function and make it discoverable and usable by an AI agent.
For our scenario, let’s build a CurrencyConverter server. Its only job is to provide a tool for converting US Dollars to Euros.
Here’s the complete Python code for our MCP server:
# currency_converter_server.py
# A simple MCP server that exposes a currency conversion tool.
from mcp.server.fastmcp import FastMCP

# A fixed, example conversion rate for simplicity.
USD_TO_EUR_RATE = 0.93

# 1. Initialize the FastMCP server with a name.
mcp = FastMCP("CurrencyConverter")

@mcp.tool()
def convert_usd_to_eur(amount_usd: float) -> str:
    """Converts a given amount in USD to EUR using a fixed rate.

    Args:
        amount_usd: The amount in US Dollars to convert.

    Returns:
        A string describing the converted amount in Euros.
    """
    amount_eur = amount_usd * USD_TO_EUR_RATE
    return f"${amount_usd:.2f} USD is equal to €{amount_eur:.2f} EUR."

# 2. Start the server to make the tool available to MCP clients.
if __name__ == "__main__":
    mcp.run(transport="stdio")
This simple server demonstrates the power of MCP: with a single decorator, our Python function is now a standardized, discoverable tool. The framework automatically handles the underlying protocol, using the function’s name, docstring, and type hints to tell other agents what this tool does and how to use it.
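For a sense of what that advertisement looks like, here is roughly the tool description a client receives when it lists this server’s tools, shown as an equivalent Python dict. The exact JSON is generated by FastMCP from the function’s signature and docstring, and the description is abbreviated here:

# Roughly the tool description generated from our function and returned by the
# protocol's "tools/list" method. Shown as a Python dict; description truncated.
advertised_tool = {
    "name": "convert_usd_to_eur",
    "description": "Converts a given amount in USD to EUR using a fixed rate. ...",
    "inputSchema": {
        "type": "object",
        "properties": {
            "amount_usd": {"type": "number"},
        },
        "required": ["amount_usd"],
    },
}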
So, how is this tool used? An MCP client, which is the AI agent or the application it lives in (like Cursor or a custom-built chatbot), would connect to this server. Once connected, the AI model could decide to use the convert_usd_to_eur tool whenever a user’s request involves currency. The client application would then make a standardized JSON-RPC call to our server, and our simple Python function would execute.
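Here is the rough shape of that JSON-RPC 2.0 call, again written as an equivalent Python dict; the request id and the argument value are illustrative:

# Roughly what the client sends over the wire when the model decides to use
# the tool: a JSON-RPC 2.0 "tools/call" request. Values are illustrative.
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "convert_usd_to_eur",
        "arguments": {"amount_usd": 100.0},
    },
}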
Below is a complete example of a client application built with LangGraph. This code launches our currency_converter_server.py, connects to it as an MCP client, dynamically loads the convert_usd_to_eur tool, and makes it available to a Gemini LLM.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import AnyMessage, add_messages
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import tools_condition, ToolNode
from typing import Annotated, List
from typing_extensions import TypedDict
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_mcp_adapters.tools import load_mcp_tools

# MCP server launch config
server_params = StdioServerParameters(
    command="python",
    args=["currency_converter_server.py"]
)

# LangGraph state definition
class State(TypedDict):
    messages: Annotated[List[AnyMessage], add_messages]

async def create_graph(session):
    # Load tools from MCP server
    tools = await load_mcp_tools(session)

    # LLM configuration
    llm = ChatGoogleGenerativeAI(
        model="gemini-2.0-flash",
        temperature=0,
        google_api_key="{{GOOGLE_GEMINI_API_KEY}}"
    )
    llm_with_tools = llm.bind_tools(tools)

    # Prompt template with user/assistant chat only
    prompt_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful assistant that uses tools to convert the currency."),
        MessagesPlaceholder("messages")
    ])
    chat_llm = prompt_template | llm_with_tools

    # Define chat node
    def chat_node(state: State) -> State:
        state["messages"] = chat_llm.invoke({"messages": state["messages"]})
        return state

    # Build LangGraph with tool routing
    graph = StateGraph(State)
    graph.add_node("chat_node", chat_node)
    graph.add_node("tool_node", ToolNode(tools=tools))
    graph.add_edge(START, "chat_node")
    graph.add_conditional_edges("chat_node", tools_condition, {
        "tools": "tool_node",
        "__end__": END
    })
    graph.add_edge("tool_node", "chat_node")
    return graph.compile(checkpointer=MemorySaver())

# Entry point
async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            agent = await create_graph(session)
            print("MCP agent is ready.")
            while True:
                user_input = input("\nYou: ").strip()
                if user_input.lower() in {"exit", "quit", "q"}:
                    break
                try:
                    response = await agent.ainvoke(
                        {"messages": user_input},
                        config={"configurable": {"thread_id": "currency_converter-session"}}
                    )
                    print("AI:", response["messages"][-1].content)
                except Exception as e:
                    print("Error:", e)

if __name__ == "__main__":
    asyncio.run(main())
This example focuses on a single tool for clarity, but MCP’s architecture is also designed to handle more complex interactions using resources and prompts. To explore these advanced primitives, or to learn how to build an MCP client from scratch, take a look at our comprehensive course.
Mastering MCP: Building Advanced Agentic Applications
This course teaches you how to use the Model Context Protocol (MCP) to build real-world AI applications. You’ll explore the evolution of agentic AI, why LLMs need supporting systems, and how MCP works, from its architecture and life cycle to its communication protocols. You’ll build both single- and multi-server setups through hands-on projects like a weather assistant, learning to structure prompts and connect resources for context-aware systems. You’ll also extend the MCP application to integrate external frameworks like LlamaIndex and implement RAG for advanced agent behavior. The course covers observability essentials, including MCP authorization, authentication, logging, and debugging, to prepare your systems for production. It concludes with a capstone project where you’ll design and build a complete “Image Research Assistant,” a multimodal application that combines vision and research capabilities through a fully interactive web interface.
While MCP gives our agent the ability to use tools, it doesn’t solve a different, equally important challenge: what happens when our agent needs to collaborate with another agent? Today’s agentic frameworks like LangChain or CrewAI are excellent for building powerful, multi-step agents, but they often operate in isolation. How do we make an agent built with CrewAI collaborate with one built on a proprietary Google framework?
The Agent2Agent (A2A) Protocol is an open standard, launched by Google and a consortium of over 50 industry partners in April 2025, that enables autonomous AI agents to discover, communicate, and collaborate with each other as peers regardless of their underlying technology.
If MCP is like plugging a keyboard into the computer, A2A is like one computer sending an email to another across the internet. It provides a universal communication protocol specifically for agent-to-agent collaboration, a critical distinction from protocols that connect agents to tools. A2A facilitates a decentralized network where specialized agents, even if built by different teams on different frameworks, can work together to solve complex problems.
Put another way: if MCP is the USB-C port for tools, A2A is the HTTP for agents. While MCP provides a standardized way for an agent to make a direct, one-to-one connection with a specific tool, A2A provides the networking protocol that allows any agent to discover and communicate with any other agent.
To enable this cross-framework communication, A2A defines a simple but powerful life cycle built on familiar web standards like HTTP and JSON-RPC. This interaction is made possible by a few core components:
Agent Card: An Agent Card is a public JSON document that acts as a discoverable profile for an agent. Think of it as a digital business card. It tells other agents who it is, what it can do (its skills), and where it can be reached.
Tasks and artifacts: A task is a structured object that represents the entire interaction from start to finish, while an artifact is the final, packaged output of a completed task. This standardized format ensures that work is requested and results are returned in a predictable way, whether the result is simple text, a file, or structured JSON.
Any collaboration between two A2A-compliant agents follows four key steps:
Discovery: A host agent (the one initiating the request) finds a remote agent (the one specialized for the task) and learns its capabilities by reading its public Agent Card.
Initiation: The host agent sends a formal task to the remote agent, asking it to perform a specific action.
Execution: The remote agent receives the task and performs its core function, such as querying a database or generating content.
Response: The remote agent packages its result into a structured artifact and sends it back to the host agent as the final output.
Now that we understand the components, let’s see why they matter. The power of A2A lies in enabling collaboration between agents that are completely independent and have no knowledge of each other’s internal logic.
Let’s imagine a real-world e-commerce scenario between two different companies: Shop Sphere, an online retailer, and Global Ship, a logistics partner.
A customer asks the Shop Sphere agent: “Where is my order #12345?” The Shop Sphere agent doesn’t know this information internally. Without A2A, a developer would have to build a custom, brittle API integration specifically for Global Ship’s tracking system.
With the A2A protocol, the process is standardized and seamless:
The Shop Sphere agent discovers the public-facing Global Ship tracking agent by fetching its Agent Card. This card advertises that the agent has a track_package skill.
The Shop Sphere agent then initiates a formal task, sending the order number to the Global Ship agent’s secure endpoint.
The Global Ship agent, which might be written in a completely different programming language and use a proprietary internal database, executes the request.
Finally, it responds with a standardized artifact containing the package status (e.g., “In transit, expected delivery tomorrow”).
The Shop Sphere agent receives this structured data and can relay it to the customer. The important point here is that this was achieved without any custom integration. Shop Sphere never needed access to Global Ship’s internal databases or proprietary code. If Shop Sphere decides to partner with a new logistics company later on, as long as that company exposes an A2A-compliant agent, the integration will work instantly, with zero code changes. This is the power a universal protocol brings to the ecosystem.
Let’s consider our e-commerce scenario. We will build the key components that allow the Shop Sphere agent to communicate with the Global Ship agent. This example will demonstrate how A2A enables a complete, cross-boundary communication loop.
First, for the Shop Sphere agent to find it, the Global Ship tracking agent must publish its public-facing digital business card. This Agent Card defines its name, endpoint URL, and the specific track_package skill it offers.
# Import the necessary data structures from the a2a-sdk library.
from a2a.types import AgentCard, AgentSkill, AgentCapabilities

# 1. Define the specific skill the GlobalShip agent offers.
track_package_skill = AgentSkill(
    id='track_package_by_id',
    name='Track Package',
    description='Retrieves the real-time shipping status for a given order ID.',
    examples=['Track my order #12345', 'Where is package #12345?']
)

# 2. Create the main Agent Card for the GlobalShip Tracking Agent.
globalship_agent_card = AgentCard(
    name='GlobalShip Tracking Agent',
    description='A specialized agent for tracking package and shipment status for GlobalShip, Inc.',
    url='https://api.globalship.com/a2a',
    skills=[track_package_skill],
    default_input_modes=['text/plain'],
    default_output_modes=['text/plain'],
    capabilities=AgentCapabilities(streaming=False),
)

# This AgentCard would be served as JSON from the agent's public URL, making it discoverable.
When a customer asks for their order status, the Shop Sphere agent (after discovering the Global Ship agent and retrieving its AgentCard) can use an A2AClient to send a structured task. This client code would be part of the Shop Sphere agent’s internal logic.
# Import the necessary A2A client and types, plus httpx for async requests.
from a2a.client import A2AClient
from a2a.types import MessageSendParams
import httpx

# This function would be part of the ShopSphere agent's logic.
async def get_shipping_status(order_id: str):
    # In a real scenario, the globalship_agent_card would be fetched from its URL.
    async with httpx.AsyncClient() as http_client:
        # 1. Initialize the client with the GlobalShip agent's card.
        client = A2AClient(
            httpx_client=http_client,
            agent_card=globalship_agent_card
        )

        # 2. Construct the message payload with the specific request.
        message_payload = {
            'message': {
                'role': 'user',
                'parts': [{'kind': 'text', 'text': f'Track order #{order_id}'}]
            }
        }

        # 3. Send the message. The A2A client handles the protocol complexities.
        response = await client.send_message(
            params=MessageSendParams(**message_payload)
        )

        # The response will contain the artifact from the GlobalShip agent.
        status = response.artifact.parts[0].text
        print(f"Status for order #{order_id}: {status}")
        return status
Finally, this is the server-side logic within the Global Ship agent. When it receives the task, its AgentExecutor processes the request (by querying its internal database) and packages the final result into a standardized artifact to be sent back to the Shop Sphere agent.
# Import the necessary data structures for the agent's response logic.
from a2a.server.agent_execution import AgentExecutor, RequestContext
from a2a.server.events import EventQueue, TaskArtifactUpdateEvent
from a2a.types import Artifact, TextPart

class GlobalShipExecutor(AgentExecutor):
    """The execution logic for the GlobalShip Tracking Agent."""

    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:
        # In a real agent, this would parse the order ID from the request
        # and query a proprietary shipping database.
        shipping_status = "In transit. Expected delivery: Tomorrow, by 8 PM."

        # 1. Package the string result into a structured TextPart.
        result_content = TextPart(text=shipping_status)

        # 2. Wrap the TextPart in an Artifact to signify the final result.
        response_artifact = Artifact(parts=[result_content])

        # 3. Create a TaskArtifactUpdateEvent containing the final Artifact.
        artifact_event = TaskArtifactUpdateEvent(
            task_id=context.task_id,
            artifact=response_artifact
        )

        # 4. Enqueue the final event. The A2A server framework handles sending this back.
        await event_queue.enqueue_event(artifact_event)
This complete loop, from discovery via an Agent Card, to a secure request from a partner, to a formal response in an artifact, is the essence of A2A. It allows for scalable, secure, and truly interoperable collaboration between agents from different organizations.
The challenge of enabling agent-to-agent communication proved so important that multiple major initiatives emerged to solve it at nearly the same time. As Google and its partners were developing A2A, another powerful open standard was taking shape, driven by a similar vision for a more connected and interoperable AI ecosystem.
Shortly before A2A was announced, another major effort to standardize agent communication was introduced. The Agent Communication Protocol (ACP) is an open standard developed by IBM Research, with the goal of providing a simple, REST-based, vendor-neutral communication layer for agent-to-agent interaction. The parallel emergence of these two powerful protocols highlighted a clear and urgent industry-wide need for a universal language for AI agents.
The core problem ACP aimed to solve is the same fundamental challenge A2A addressed: framework diversity. A development team might find that a research agent is best built using LangChain for its robust data connectors, while a creative writing agent works better with a framework like CrewAI. How do we make them collaborate without writing custom, brittle integrations for each pair?
ACP was designed to be that universal bridge. Let’s imagine an “Automated Content Strategist” agent tasked with generating a new, data-driven blog post. Using ACP, it could orchestrate a multi-framework team:
First, it would task a research agent, built with LangChain, to gather the latest market statistics and trending topics related to a product.
The research findings would then be passed via ACP to a writing agent, built with CrewAI, to produce the first draft.
Finally, that draft would be sent to an SEO agent, perhaps one hosted on a separate platform such as BeeAI, to provide optimization suggestions.
This “best-tool-for-the-job” approach is the ideal for multi-agent systems, and ACP made it possible.
At its core, ACP was also designed around a familiar client-server model, making it easy to implement and understand. The architecture was composed of a few key components:
ACP server: A wrapper that hosted one or more AI agents and exposed their capabilities through a standard RESTful API. Its main purpose was to handle incoming HTTP requests, manage the agent’s execution life cycle, and format the responses.
ACP client: Any agent, application, or service that sent requests to an ACP server to invoke an agent’s skills. The client was responsible for discovering other agents and initiating communication.
Agent Manifest: A metadata file that acted as an agent’s discoverable profile. Similar in concept to A2A’s Agent Card, it described what the agent could do, the inputs it accepted, and how to communicate with it, enabling dynamic, on-the-fly discovery.
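As a rough illustration, a manifest might carry metadata like the following; the field names approximate the kind of information ACP manifests described rather than reproducing the exact schema:

# An illustrative Agent Manifest for the SEO agent from our scenario.
# Field names approximate ACP's metadata; this is not the exact schema.
agent_manifest = {
    "name": "seo_agent",
    "description": "Suggests SEO optimizations for a draft blog post.",
    "input_content_types": ["text/plain"],
    "output_content_types": ["text/plain"],
}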
A core design philosophy of ACP was simplicity. Unlike protocols that rely on more complex formats like JSON-RPC, ACP was built directly on RESTful principles, using standard HTTP methods (like POST) and conventions. This made it lightweight and easy to integrate into existing systems without requiring specialized SDKs.
The typical interaction life cycle in ACP was simple, mirroring standard API communication:
Wrapping and publishing: First, a developer would take their existing agent logic and wrap it in an ACP server, which publishes an Agent Manifest.
Discovery and interaction: An ACP client (like our orchestrator) could then discover the agent by reading its manifest. To use the agent, the client would simply send a structured HTTP request to the agent’s endpoint.
Processing and response: The ACP server would receive this request, pass it to the underlying agent for processing, and then return the result in a standard HTTP response. The protocol also included support for long-running tasks via asynchronous communication and streaming, making it robust for complex, real-world workflows.
This SDK-optional and REST-based design was important because developers could interact with an ACP-compliant agent using basic tools like curl or Postman, making testing and integration easier.
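To make that concrete, here is a hedged sketch of invoking an ACP-wrapped agent from Python with nothing but an HTTP POST. The localhost URL, the /runs path, and the payload fields are illustrative of ACP’s REST conventions, not its exact API:

# A hedged sketch: calling an ACP-wrapped agent with a plain HTTP POST.
# The endpoint path and payload shape are illustrative, not the exact ACP schema.
import httpx

async def invoke_seo_agent(draft_text: str) -> dict:
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "http://localhost:8000/runs",  # hypothetical ACP server endpoint
            json={
                "agent_name": "seo_agent",  # illustrative agent from our scenario
                "input": [{"role": "user", "content": draft_text}],
            },
        )
        response.raise_for_status()
        return response.json()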
While ACP presented a powerful vision for interoperability, the agentic ecosystem was evolving rapidly. In a move that signaled a strong commitment to a unified ecosystem, the ACP initiative made a pivotal decision.
In August 2025, it was announced that ACP would officially merge with the A2A protocol under the Linux Foundation. The merger combines the expertise and assets of both initiatives to build a single, more powerful, unified standard for how AI agents communicate. This means the powerful, framework-agnostic vision demonstrated by ACP is not lost; it is now being integrated into the broader, industry-wide A2A standard. For developers, this is excellent news, as it signals a clear and unified path forward for building the next generation of collaborative AI systems.
With the ecosystem consolidating around a two-protocol model, developers must understand the distinct roles of MCP and the unified A2A. Let’s summarize the key differences.
While these protocols share the goal of creating a more interoperable AI ecosystem, they are designed to be complementary, solving two different but related challenges.
Here is a quick comparison summarizing the key distinctions:
| Feature | Model Context Protocol (MCP) | Agent2Agent (A2A) Protocol |
| --- | --- | --- |
| Primary Focus | Agent-to-tool communication | Agent-to-agent collaboration |
| Core Analogy | The USB-C port for AI | The HTTP for agents |
| Interaction Model | Connecting to external tools and data sources | Orchestrating multi-agent workflows |
| Typical Use Case | Used when an agent needs to query a database, call a specific API (like a weather or flight service), or access local files | Used when one agent needs to delegate a specialized task to another, such as in a complex customer service or automated pipeline scenario |
| Key Components | Tools, resources, and prompts | Agent Card, tasks, and artifacts |
| Communication Style | JSON-RPC 2.0 | Built on HTTP, SSE, and JSON-RPC |
| Developer | Anthropic | Google and 50+ partners |
We’ve explored how the once-fragmented world of AI agents is rapidly moving toward a future of structured collaboration. We’ve seen that this new ecosystem is built on two complementary layers of communication. If protocols like MCP provide a standard method for an agent to connect to its tools, then A2A provides the universal language for agents to collaborate with their colleagues. The convergence of efforts like ACP into the A2A standard shows a powerful industry-wide commitment to this unified, two-layer approach.
The true power of these standards emerges when they are used together. A mature, sophisticated AI system will leverage both: using MCP internally to interact with its specific tools and data resources, and A2A externally to delegate tasks and collaborate with other independent agents across the network. This marks a shift in how we build AI-powered applications, and the impact of agentic AI on software engineering roles is set to grow. We are moving from developing isolated, monolithic agents to architecting intelligent, collaborative multi-agent systems. The foundation for the next generation of AI has been laid, and now we have the blueprints to start building.