Building Single-Server MCP Architecture
Learn how to implement a Python MCP server and a LangGraph client, connecting them to build a complete, tool-using agent.
Having explored the core architecture of MCP, we will now transition from theory to practice by building a complete, functional agentic system. This hands-on approach will demonstrate how MCP’s client-server model comes to life in a real-world application, connecting an intelligent agent to a custom tool.
Scenario: Building an intelligent weather assistant
Let’s imagine we want to create an intelligent weather assistant. While a powerful language model like GPT-4 can discuss meteorology, it cannot provide the current weather for a specific city because its knowledge is not real-time. Our goal is to build an agent that can answer a user’s query, such as “What’s the weather like in Tokyo?”, by accessing a live external data source.
Implementing our agent
To achieve this, we will implement a complete client-server system using MCP. This system will be powered by live data from OpenWeatherMap, a widely used weather data service that we will access via a secure API key.
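Before diving into the server itself, it can help to see how such a request to OpenWeatherMap is typically assembled. The sketch below uses only the Python standard library and builds the URL for the documented current-weather endpoint; the environment-variable name OPENWEATHER_API_KEY and the helper name build_weather_url are our own choices for illustration, not required by the API.

```python
import os
from urllib.parse import urlencode

def build_weather_url(location: str) -> str:
    """Assemble a request URL for OpenWeatherMap's current-weather endpoint."""
    # OPENWEATHER_API_KEY is an assumed env-var name for this sketch;
    # any mechanism that keeps the key out of source code will do.
    api_key = os.environ.get("OPENWEATHER_API_KEY", "demo-key")
    query = urlencode({"q": location, "appid": api_key, "units": "metric"})
    return f"https://api.openweathermap.org/data/2.5/weather?{query}"

print(build_weather_url("Tokyo"))
```

Keeping the key in an environment variable rather than in the code is what makes it practical to share or publish the server later.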
Our architecture is intentionally decoupled, separating the agent’s “brain” from its specialized “skill.” This allows each component to be developed and maintained independently. Here’s a detailed look at the two components we will build:
MCP server (the weather provider): This server acts as our dedicated "weather tool." Its role is singular and focused: to encapsulate all the logic needed to communicate with the OpenWeatherMap API. It will expose a single, discoverable tool named get_weather. This tool will accept a location string as input, fetch the live weather data, and then process the complex JSON response from the API into a simple, clean dictionary. This abstracts all the complexity of the external API away from our agent.

MCP client (the agent brain): This client is the intelligent core of our application. Using the powerful LangChain and LangGraph libraries, it will function as a stateful agent. It will take a natural language query from the user (e.g., "What is the weather like in Tokyo?") and use the LLM to understand the user's intent and decide that the get_weather tool is needed. It will then intelligently extract the "Tokyo" argument, send the tool call request to our server via MCP, and receive the structured weather data in response. Finally, it will pass this data back to the LLM to formulate a human-friendly answer for the user.
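The response-shaping step the server performs is worth seeing in isolation. The sketch below assumes the standard field layout of OpenWeatherMap's current-weather JSON (with metric units); the helper name summarize_weather and the output keys are illustrative choices, not part of MCP or of the API.

```python
def summarize_weather(payload: dict) -> dict:
    """Reduce a raw OpenWeatherMap response to the few fields the agent needs."""
    return {
        "city": payload["name"],
        "conditions": payload["weather"][0]["description"],
        "temperature_c": payload["main"]["temp"],
        "humidity_pct": payload["main"]["humidity"],
        "wind_speed_ms": payload["wind"]["speed"],
    }

# A trimmed-down payload of the kind the API returns:
raw = {
    "name": "Tokyo",
    "weather": [{"description": "scattered clouds"}],
    "main": {"temp": 18.4, "humidity": 62},
    "wind": {"speed": 3.1},
}
print(summarize_weather(raw))
```

Returning a flat dictionary like this keeps the tool's output small and predictable, which makes it far easier for the LLM on the client side to compose a natural-language answer from it.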
With this architectural blueprint in mind, we can now proceed to the implementation, starting with our dedicated weather server.
...