
Adding Prompts in Single-Server MCP

Explore how to implement user-callable prompts within a single-server MCP setup to enable complex, multi-step reasoning workflows. Learn to define prompts that guide the language model in orchestrating existing tools, update clients to discover and invoke prompts, and improve the agent's intelligence with user-friendly commands.

The agent we built is now proficient at using tools to perform specific actions like fetching weather. However, what if we want to guide the agent through a more complex, multi-step reasoning process, such as comparing the weather in two different cities? This is the role of MCP Prompts. Instead of executing code on the server, a prompt provides a pre-defined conversational recipe that the client sends to the LLM to orchestrate a more sophisticated interaction. In this lesson, we’ll enhance our single-server application by implementing a prompt, adding a new layer of user-guided intelligence to our weather agent.

Introduction to prompts

While tools are excellent for executing discrete, agent-driven actions, sometimes we need to guide the LLM through a more complex reasoning process. This is the purpose of an MCP prompt. An MCP prompt is a server-defined, user-callable “recipe” for the LLM. It’s a structured template created by the developer that encapsulates a complex set of instructions. When a user invokes a prompt, the client simply fetches this pre-written template and sends it to the LLM. This empowers a regular user to trigger a sophisticated, multi-step reasoning process with a simple command, without needing to be a prompt engineering expert themselves.
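To make that concrete, here is a minimal sketch of what a prompt definition can look like on the server side. It assumes the official Python MCP SDK's FastMCP helper; summarize_forecast is an illustrative name, not something our weather server already exposes. The key point is the shape: a decorated function that returns instructions for the LLM rather than executing an action itself.

```python
# Minimal sketch, assuming the Python MCP SDK's FastMCP helper.
# "summarize_forecast" is illustrative, not part of our existing server.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.prompt()
def summarize_forecast(city: str) -> str:
    """A user-callable recipe: the client fetches this text and hands it to the LLM."""
    return (
        f"Call the get_weather tool for {city}, then summarize the forecast "
        "in two friendly sentences aimed at a non-technical reader."
    )
```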

Tools vs. prompts: A critical distinction

The most important difference between tools and prompts lies in who initiates them. Understanding this is key to designing effective agentic systems.

  • Tools: These are designed for the agent to call autonomously. The LLM decides when to use a tool like get_weather based on its reasoning about the conversational context to achieve a goal.

  • Prompts: These are designed for the user to initiate directly. The user selects a prompt like compare_weather from a list of available options (e.g., a slash command), provides the necessary arguments, and the client then uses the prompt’s template to guide the LLM, as sketched below.
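The following rough client-side sketch shows the two invocation paths side by side. It assumes the Python MCP SDK's ClientSession; our actual client loop may be structured differently, and the return values here only indicate what would be handed to the LLM next.

```python
# Sketch of the two invocation paths, assuming the Python MCP SDK's ClientSession.
from mcp import ClientSession


async def choose_llm_input(session: ClientSession, user_input: str):
    """Decide what the client sends to the LLM for this turn."""
    if user_input.startswith("/compare_weather"):
        # Prompt path: the USER explicitly invoked a prompt (e.g., a slash command).
        # The client fetches the server-defined template and forwards it to the LLM.
        _, city1, city2 = user_input.split()
        prompt = await session.get_prompt(
            "compare_weather", arguments={"city1": city1, "city2": city2}
        )
        return prompt.messages  # pre-written recipe, to be converted to the LLM's message format

    # Tool path: the AGENT decides. The client passes the raw user text plus the
    # tool schemas, and the LLM may choose to call get_weather on its own.
    tools = await session.list_tools()
    return user_input, tools.tools
```

Prompts are discovered the same way tools are: the client can call session.list_prompts() at startup and surface the results to the user as slash commands or menu entries.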

Scenario: Expanding our weather agent’s capabilities

Our weather agent currently excels at its single, defined task: retrieving the forecast for one location. But consider a more nuanced user request: “How does the weather in London compare to Paris today?” While our agent could technically call its get_weather tool twice, the responsibility of orchestrating this two-step process and then synthesizing the results into a clear comparison falls entirely on the user’s ability to craft a perfect, detailed prompt. The results could be verbose and inconsistent. To solve this, we will empower our agent by defining a compare_weather prompt on the server: a pre-written recipe that instructs the LLM to call get_weather for each city and synthesize the results into a clear, consistent comparison.
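As a preview of where this lesson is heading, a server-side definition could look roughly like the sketch below (again assuming FastMCP from the Python MCP SDK; the argument names and template wording are illustrative, and the version we build may differ in its details).

```python
# Illustrative sketch of the upcoming prompt, assuming FastMCP from the Python MCP SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.prompt()
def compare_weather(city1: str, city2: str) -> str:
    """Guide the LLM through a two-step, tool-using comparison."""
    return (
        f"Use the get_weather tool once for {city1} and once for {city2}. "
        "Compare the two forecasts, highlighting differences in temperature and "
        "conditions, and end with a one-sentence recommendation on which city "
        "has the nicer weather today."
    )
```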