Creating the MCP Client
Learn how to build and run an AI-powered client that connects to an MCP tool server and responds to Wikipedia queries in natural language.
In the previous lesson, we built an MCP server that exposes a set of tools for interacting with Wikipedia—tools for searching articles, listing sections, and retrieving section content. In this lesson, we complete the loop by implementing the client side of the application.
We’ll build an AI-powered client agent capable of invoking these tools in response to user queries. The agent will be constructed using LangChain for tool-aware logic, LangGraph to manage execution flow, and OpenAI as the underlying LLM that interprets input and selects tools. Although there are multiple ways to build MCP clients—including platforms like Claude Desktop, VS Code agents, or the OpenAI Agents SDK—we’ve selected LangGraph because it provides a programmable, event-driven framework for reasoning, tool usage, and state transitions. Its native support for tool routing, message passing, and memory checkpoints aligns well with MCP’s modular interface, making it a practical and scalable solution for client-side orchestration.
The result is an intelligent agent capable of handling queries such as:
“Tell me about the greenhouse effect.”
“List the sections of the Coronavirus article.”
“What does the definition of the greenhouse effect article say?”
By the end of this lesson, you’ll have a fully functional client agent that triggers server-side tools and returns meaningful, structured answers.
Setting up the client environment
We’ll begin by installing the libraries required to build the client. These enable connection to the MCP server, access to its tools, and integration with a language model agent.
Required packages:
- `mcp`: the official Python SDK for the Model Context Protocol.
- `langgraph`: used to construct a reactive agent capable of tool invocation.
- `langchain_openai`: the adapter package that integrates OpenAI models with LangChain.
- `langchain_mcp_adapters`: bridges LangChain's agent interface with the MCP client tools.
We can install these libraries with the following commands:
```shell
pip install mcp
pip install langchain
pip install langgraph
pip install langchain-openai
pip install langchain-mcp-adapters
```
Note: These installations have already been completed in the course environment. You can focus directly on writing and executing code.
Now that we understand the dependencies, let’s walk through how to create and run the client agent.
Designing the client agent
Now that our server is ready and our environment is configured, we can design the client agent that will interact with the tools we built. The role of the client agent is to:
- Accept natural language queries from the user.
- Decide which tool (or sequence of tools) to use.
- Call the MCP server via the `stdio` interface.
- Return a structured, helpful response.
We’ll use LangGraph to define the interaction flow as a graph of nodes. Each node represents a reasoning step, such as generating a response or selecting a tool. OpenAI’s LLM powers the agent, and LangChain’s MCP adapters allow tool integration during runtime.
This setup gives us more control and flexibility than traditional agent abstractions, especially when managing state across turns. Now let’s walk through the code step by step to bring this graph-based agent to life.
Defining server connection parameters
The first step in setting up our client is to define how it will connect to the tool server. The MCP server runs as a local subprocess using standard input/output (`stdio`) as the transport layer. We define the server's command and arguments using `StdioServerParameters`
...