
Introduction to MCP and its Architecture

Learn about the client-server architecture, communication model, and key primitives of the MCP.

Our AI assistant needs to manage our daily tasks. It should be able to check our calendar for free slots, order groceries from our favorite online store, and then send a confirmation message to a friend. Sounds simple, right? But behind the scenes, without a common language, this seemingly straightforward request becomes a complex web of custom integrations, each speaking a different dialect to our calendar app, the grocery service API, and our messaging app.

This is the core problem that MCP is designed to solve. It acts as a universal adapter, or a common language, for AI integrations. Instead of building multiple custom connectors for our calendar, grocery, and messaging apps, MCP enables our AI assistant to seamlessly interact with each service through a standardized interface. This approach dramatically reduces complexity, eliminates redundant work, and creates a reusable, interoperable ecosystem of AI tools and services.

What is Model Context Protocol (MCP)?

At its core, the Model Context Protocol (MCP) operates on a principle that has been the backbone of the internet for decades: the client-server architecture. This is the same fundamental model we use every day when we check our email or browse a website.

In a traditional client-server setup:

  • A client (like our web browser or email application) requests information or a service.

  • A server (a powerful computer that stores the website’s files or our emails) processes that request and sends back a response.

A classic client ↔ server request/response flow

Following this same proven model, the MCP standardizes how AI agents and language models communicate with the outside world. Before MCP, connecting an AI to a new tool or data source required a brittle, one-off integration. MCP introduces a common language, much like how HTTP is a common language for web browsers and servers.

In the MCP ecosystem:

  • The MCP host is the application where the AI model runs (e.g., a chatbot interface, AI agents, IDEs).

  • The MCP client resides within the host and is responsible for making requests to external tools and data sources. It acts on behalf of the AI model and maintains a dedicated 1:1 connection with each MCP server.

  • The MCP server exposes an external tool or data source (like a database, API, or local file system). It serves information and capabilities to the client in a standardized format that the AI can understand.


By adopting this client-server architecture, MCP brings several key advantages to AI development:

  • Standardization and scalability: Developers no longer need to build unique connectors for every tool. A tool with an MCP server can seamlessly connect with any AI application that functions as an MCP client.

  • Modularity and flexibility: AI models can easily swap out or add new tools without significant re-engineering. This modularity allows for more powerful and adaptable AI systems.

  • Security: The protocol establishes clear boundaries and rules for data exchange, enhancing the security of the interactions between AI and external resources.

In essence, MCP takes the robust and scalable client-server concept and tailors it to the specific needs of AI, creating a standardized plug-and-play environment for connecting models to the vast world of external data and tools.

The architectural advantage of MCP

While the benefits of standardization and modularity are clear, it raises a crucial architectural question: if an agent can be programmed to call an API directly, why introduce the intermediary of an MCP server?

Without MCP, our AI assistant’s code would need to include specific logic for the calendar’s API, the grocery service’s API, and the messaging app’s API. The agent itself would become a complex mix of different API clients, each with unique authentication methods, data formats, and error handling. This approach is brittle; if the grocery API changes its endpoint, the entire agent application must be updated. Furthermore, the code we write to connect to the calendar is not easily reusable by any other AI application.

MCP solves this by introducing a powerful layer of abstraction. Instead of teaching the agent how to speak to every individual API, we only need to teach it how to speak one language: MCP. The server handles the messy, specific details of talking to the underlying service. It then exposes that capability as a standardized MCP tool. This creates a clean separation of concerns, making the entire system more robust, scalable, and secure. Now, let’s explore the building blocks of this new language.


While agentic AI can accomplish tasks without MCP, incorporating it streamlines the process: standardization makes integrations easier to build, more scalable, and less error-prone.

How MCP actually works

At the heart of MCP are three fundamental primitives that define how an AI agent interacts with the outside world:

  • Tools: Tools are the action-oriented functions the agent can execute. Think of them as verbs; they actively do something, like creating a calendar entry, sending an email, or updating a database record. This is conceptually similar to a POST request in a traditional API, as it causes a change or performs an action.

  • Resources: Resources, on the other hand, are read-only sources of information, essentially the nouns in the conversation. They allow the agent to retrieve data for context, such as getting a list of files in a folder or reading today’s meeting schedule. This is analogous to a GET request, where the goal is simply to fetch information without altering it.

  • Prompts: Prompts are pre-defined recipes for interaction that reside on the server. They provide an optimized and reusable way to perform common tasks, saving users from the overhead of complex prompt engineering.

Tools, resources, prompts

Defining a tool

MCP provides software development kits (SDKs) in various languages that make defining these primitives incredibly simple. When creating a tool on an MCP Server, we typically just write a standard function and then use a special decorator to register it with the protocol.

A decorator is like a label we attach to our function that tells the MCP server, “This function is a tool that should be made available to clients.” The SDK handles all the underlying work of announcing this tool and processing calls to it.
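
To make the registration idea concrete, here is a minimal, hypothetical ToyServer (not the real SDK) whose tool() decorator simply records functions in a dictionary, which is roughly what @mcp.tool() automates for us:

```python
# Hypothetical ToyServer: a minimal model of what a registration
# decorator does. The real SDK also handles schemas, transport, and
# request dispatch on top of this.

class ToyServer:
    def __init__(self):
        self.tools = {}  # name -> function registry

    def tool(self):
        def register(func):
            self.tools[func.__name__] = func  # announce the tool by name
            return func                       # the function itself is unchanged
        return register

mcp = ToyServer()

@mcp.tool()
def subtract(a: int, b: int) -> int:
    return a - b

print(list(mcp.tools))              # ['subtract']
print(mcp.tools["subtract"](5, 3))  # 2
```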

We will be using MCP’s Python SDK throughout the course.

Here is an example of how we might define a simple subtract tool using the Python MCP SDK:

@mcp.tool()
def subtract(a: int, b: int) -> int:
    """
    Subtracts the second number from the first.

    Args:
        a: The number to subtract from.
        b: The number to subtract.

    Returns:
        The difference between the two numbers.
    """
    print(f"Executing 'subtract' tool with {a} and {b}")
    return a - b
Defining a simple tool in MCP

In this code, the @mcp.tool() decorator automatically makes the subtract function discoverable and callable by any connected MCP client. The function’s type hints (a: int) and docstring are also used by the protocol to tell the client what parameters the tool expects and what it does.
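
We can see how much a plain function signature already communicates by introspecting it ourselves. The schema format below is invented for illustration; the SDK derives its own richer description, but from the same raw material: the function name, type hints, and docstring.

```python
import inspect
from typing import get_type_hints

def subtract(a: int, b: int) -> int:
    """Subtracts the second number from the first."""
    return a - b

def describe_tool(func) -> dict:
    # Build a simple, made-up tool description from the function itself.
    hints = get_type_hints(func)
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func),
        "parameters": {n: h.__name__ for n, h in hints.items() if n != "return"},
        "returns": hints["return"].__name__,
    }

print(describe_tool(subtract))
# {'name': 'subtract', 'description': 'Subtracts the second number from the first.',
#  'parameters': {'a': 'int', 'b': 'int'}, 'returns': 'int'}
```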

Defining a resource

Similar to tools, resources are defined on the MCP server using a decorator. We simply define a function that returns the data, and the @mcp.resource() decorator makes it available at a fixed URI. This example exposes a resource that reports the server’s current status whenever a client reads it.

import datetime

@mcp.resource(
    "system://status",
    mime_type="application/json"
)
def get_system_status():
    """
    Returns the current status and version of the server.
    """
    return {
        "status": "online",
        "version": "1.2.0",
        "server_time": datetime.datetime.now().isoformat()
    }
Defining a resource in MCP

Here, the @mcp.resource() decorator registers the get_system_status function. When a client requests the system://status resource, the server will execute this function and return the JSON object containing the system’s current status and version.
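
The URI-to-function dispatch can be modeled in a few lines. The ToyResources class below is hypothetical; it only illustrates how a read of system://status could be routed to the registered function.

```python
import datetime

# Hypothetical ToyResources: maps resource URIs to handler functions,
# a simplified model of what the SDK's resource registry does.

class ToyResources:
    def __init__(self):
        self._registry = {}

    def resource(self, uri: str):
        def register(func):
            self._registry[uri] = func
            return func
        return register

    def read(self, uri: str):
        # A client "read" of the URI executes the registered function.
        return self._registry[uri]()

mcp = ToyResources()

@mcp.resource("system://status")
def get_system_status():
    return {
        "status": "online",
        "version": "1.2.0",
        "server_time": datetime.datetime.now().isoformat(),
    }

status = mcp.read("system://status")
print(status["status"], status["version"])  # online 1.2.0
```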

Defining a prompt

Like tools and resources, prompts are defined using a decorator on the MCP Server. The function signature indicates the required inputs for the prompt and specifies that it will return a list of messages.

This example shows a prompt designed to summarize a given piece of text:

from mcp.server.fastmcp.prompts import base

@mcp.prompt(
    name="summarize",
    description="Summarizes a long piece of text into a few key bullet points."
)
def summarize_text(
    text_to_summarize: str
) -> list[base.Message]:
    """Generates a prompt to summarize the provided text."""
    # Return the list of messages the LLM will receive.
    return [
        base.UserMessage(
            "Summarize the following text into a few key bullet points:\n\n"
            + text_to_summarize
        )
    ]
Defining a prompt in MCP

This code registers a server-side prompt named summarize that generates a list of messages to guide an AI in summarizing text.
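
To see what “a list of messages” means in practice, here is a plain-Python sketch of what such a prompt might expand to. The dicts stand in for the SDK’s message objects, and the wording of the instruction is invented:

```python
# Plain-Python sketch of a prompt "recipe": one argument in, a
# ready-made message list out. Dicts stand in for SDK message objects.

def summarize_prompt(text_to_summarize: str) -> list[dict]:
    return [{
        "role": "user",
        "content": ("Summarize the following text into a few key "
                    "bullet points:\n\n" + text_to_summarize),
    }]

messages = summarize_prompt("MCP standardizes AI-to-tool communication.")
print(messages[0]["role"])  # user
print(len(messages))        # 1
```

Because the recipe lives on the server, every client gets the same well-tuned instruction without re-doing the prompt engineering.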

MCP provides a crucial and elegant solution to the growing complexity of building agentic AI. By establishing a standardized client-server architecture, it eliminates the need for brittle, one-off integrations and replaces them with a scalable, interoperable ecosystem. Through its three core primitives (action-oriented tools, read-only resources, and reusable prompts), developers can create powerful, modular, and easy-to-maintain AI applications. Ultimately, MCP is the common language that allows a new generation of sophisticated AI agents to reliably communicate with and orchestrate the vast world of digital tools and data.