Adding Prompt Feature to the MCP Application
Learn how to expose and invoke LLM prompts using MCP, enabling dynamic and instruction-driven behavior in the client agent.
We previously built an MCP server with callable tools and connected it to a LangGraph-based client. While tools are well-suited for direct actions (e.g., “fetch section content”), real-world assistants often require more flexible behavior, such as reasoning about relevance or summarizing a list. This is where prompts come into play.
MCP prompts allow developers to expose custom LLM instructions as invocable interfaces. Unlike tools, prompts are not executed on the server. Instead, they return text templates that shape the model’s behavior during inference. In this lesson, we’ll implement a prompt that helps users identify the most important sections of a Wikipedia article and explore how it integrates into the server–client workflow.
What are prompts in MCP?
In MCP, prompts are predefined text templates that guide the behavior of a language model. Whereas tools are invoked and executed server-side to produce structured outputs, prompts are passive: they return a text string when called. This string typically contains a carefully crafted instruction injected into the LLM’s context.
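To make the contrast concrete, here is a minimal sketch of a server that exposes both a tool and a prompt, assuming the Python MCP SDK's FastMCP server. The server name, function names, and bodies are illustrative placeholders rather than the lesson's exact code.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wikipedia-assistant")  # illustrative server name

@mcp.tool()
def fetch_section_content(title: str, section: str) -> str:
    """Executed on the server: performs real work and returns data."""
    # Placeholder body; a real implementation would call the Wikipedia API.
    return f"(content of section '{section}' from the article '{title}')"

@mcp.prompt()
def summarize_article(topic: str) -> str:
    """Not executed as an action: returns an instruction string that the
    client injects into the LLM's context."""
    return f"Summarize the key sections of the Wikipedia article on '{topic}'."
```

Calling the tool runs its body on the server and returns the result, while calling the prompt simply hands the returned instruction text back to the client for the model to reason over.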
Prompts are particularly useful when:
The task requires summarization, prioritization, or natural language reasoning.
The assistant should express user intent declaratively rather than procedurally.
MCP prompts are:
Defined and hosted on the server, but not executed there.
Discovered and invoked by the client, just like tools (see the client-side sketch after this list).
Parameterized, so users can pass in structured inputs like topic: str.
Deterministic in return value, always returning a text block (not JSON or function output).
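Because prompts are listed and called through the same protocol surface as tools, the client can discover and invoke them with a standard MCP client session. The sketch below assumes the Python MCP SDK over a stdio transport; the server script path ("server.py") and prompt name ("top_sections") are assumptions for illustration.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Assumed server entry point for illustration.
    server = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Prompts are discoverable, just like tools.
            prompts = await session.list_prompts()
            print([p.name for p in prompts.prompts])

            # Invoking a prompt returns messages (text), not an executed result.
            result = await session.get_prompt(
                "top_sections", arguments={"topic": "Alan Turing"}
            )
            for message in result.messages:
                print(f"{message.role}: {message.content}")

asyncio.run(main())
```

The result contains prompt messages, plain text that the LangGraph client can feed into the model's context, rather than a structured tool output.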
In this part of the lesson, we'll define a prompt that takes a Wikipedia topic as input and returns a prompt string instructing the model to identify the most important sections of the article.
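The following is a sketch of what that server-side definition could look like, again assuming the FastMCP server from the earlier part of the project; the prompt name and the instruction wording are illustrative rather than the lesson's final implementation.

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wikipedia-assistant")  # illustrative server name

@mcp.prompt()
def top_sections(topic: str) -> str:
    """Return an instruction asking the LLM to rank the most important
    sections of a Wikipedia article on the given topic."""
    return (
        f"You are helping a reader study the Wikipedia article on '{topic}'. "
        "Review the article's sections and identify the three most important "
        "ones to read first, briefly explaining why each matters."
    )

if __name__ == "__main__":
    mcp.run()  # serve over stdio so the client can connect
```

Note that the function body performs no lookup or computation of its own: it only assembles text, which is exactly what distinguishes a prompt from a tool.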