MCP Prompts: Dynamic Instruction Templates for AI Agents
Master MCP prompts — server-defined instruction templates that guide AI agent behavior with dynamic arguments, context-aware instructions, and reusable prompt patterns across multiple agents.
The Third Pillar of MCP
MCP defines three core primitives: tools (actions), resources (data), and prompts (instructions). While tools and resources get most of the attention, prompts are the mechanism that lets MCP servers ship reusable, context-aware instruction templates alongside their tools.
An MCP prompt is a server-defined template that generates one or more messages for the AI agent's conversation. Think of prompts as expert-authored instructions that know how to use the server's tools effectively. When a server exposes a complex tool like a database query engine, it can also expose prompts that guide the agent through common workflows — "analyze this table," "debug this query," or "generate a migration."
Defining Prompts in Python
In FastMCP, prompts are defined with the @mcp_server.prompt() decorator. Each prompt returns a string or a list of messages:
from mcp.server.fastmcp import FastMCP
mcp_server = FastMCP(name="DatabaseAssistant")
@mcp_server.prompt()
async def analyze_table(table_name: str) -> str:
    """Generate instructions for analyzing a database table.

    Args:
        table_name: The name of the table to analyze.
    """
    return f"""You are a database analyst. Analyze the table '{table_name}'
by following these steps:
1. First, call list_tables to confirm the table exists and review its schema.
2. Call query_db with "SELECT COUNT(*) as total FROM {table_name}" to get the row count.
3. For each column, run a query to find null counts, distinct values, and
   min/max for numeric columns.
4. Identify potential data quality issues such as:
   - Columns with high null rates (over 50 percent)
   - Low cardinality columns that might benefit from an enum
   - Numeric columns with suspicious outliers
5. Summarize your findings in a structured report."""
When an agent requests this prompt with table_name="orders", it receives a fully formed instruction set that references the server's own tools by name. The agent does not need to figure out the workflow — the prompt encodes the expert knowledge.
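At the protocol level, this request travels as a prompts/get call. A sketch of the exchange for the prompt above, with field names following the MCP specification (the request id and the truncated text are illustrative):

```python
# JSON-RPC request the client sends to render the prompt.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "prompts/get",
    "params": {
        "name": "analyze_table",
        "arguments": {"table_name": "orders"},
    },
}

# Shape of the server's result: a description plus the rendered messages.
# A plain-string return from a FastMCP prompt becomes one user message.
result = {
    "description": "Generate instructions for analyzing a database table.",
    "messages": [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "You are a database analyst. Analyze the table 'orders' ...",
            },
        }
    ],
}
```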
Prompts with Multiple Messages
Prompts can return multi-turn message sequences. MCP prompt messages use only the user and assistant roles (the protocol defines no system role), so multi-message prompts are most useful for few-shot examples or for pairing framing instructions with the content to act on:
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base
mcp_server = FastMCP(name="CodeReviewer")
@mcp_server.prompt()
async def review_code(
    language: str,
    code_snippet: str,
    focus_area: str = "general",
) -> list[base.Message]:
    """Generate a code review prompt for the given snippet.

    Args:
        language: The programming language of the code.
        code_snippet: The code to review.
        focus_area: What to focus on - security, performance, or general.
    """
    focus_instructions = {
        "security": (
            "Focus on security vulnerabilities: injection attacks, "
            "authentication bypasses, data exposure, and unsafe operations."
        ),
        "performance": (
            "Focus on performance issues: algorithmic complexity, "
            "memory allocation, database query efficiency, and caching."
        ),
        "general": (
            "Review for code quality: readability, maintainability, "
            "error handling, and adherence to best practices."
        ),
    }
    instructions = focus_instructions.get(focus_area, focus_instructions["general"])
    return [
        base.UserMessage(
            content=f"""Review the following {language} code.

{instructions}

Provide specific, actionable feedback with line references.

Code to review:
{code_snippet}"""
        ),
    ]
Prompts That Reference Resources
A powerful pattern is combining prompts with resources. The prompt can instruct the agent to read specific resources as part of its workflow:
@mcp_server.prompt()
async def debug_slow_query(query: str) -> str:
    """Generate a debugging workflow for a slow database query.

    Args:
        query: The SQL query that is running slowly.
    """
    return f"""You are debugging a slow SQL query. Follow this workflow:
1. Read the resource at metrics://database/current to understand current
   database load and connection counts.
2. Run the following query through the query_db tool to get the execution plan:
   EXPLAIN ANALYZE {query}
3. Read the resource at config://database/indexes to see which indexes
   exist on the relevant tables.
4. Based on the execution plan and available indexes, identify:
   - Full table scans that could be eliminated with an index
   - Join operations that lack supporting indexes
   - Sort operations on unindexed columns
   - Subqueries that could be rewritten as joins
5. Suggest specific CREATE INDEX statements and query rewrites.
   Estimate the expected improvement for each suggestion."""
This prompt ties together tools and resources into a coherent debugging workflow. The agent receives a step-by-step playbook tailored to the server's capabilities.
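On the client side, an agent runtime can fetch this prompt and gather the data its steps call for through the session's standard methods. A minimal sketch, assuming `session` is an initialized `ClientSession` from the MCP Python SDK; the `sql` argument name for the query_db tool is an assumption, since the article does not show that tool's schema:

```python
async def run_debug_workflow(session, query: str):
    """Fetch the debug prompt, then gather the context its steps reference."""
    # Render the server-defined prompt with the slow query filled in.
    prompt = await session.get_prompt("debug_slow_query", {"query": query})

    # Step 1: current database load, from the metrics resource.
    metrics = await session.read_resource("metrics://database/current")

    # Step 2: execution plan via the server's query tool
    # ("sql" is a hypothetical argument name).
    plan = await session.call_tool("query_db", {"sql": f"EXPLAIN ANALYZE {query}"})

    # Step 3: available indexes, from the config resource.
    indexes = await session.read_resource("config://database/indexes")

    # The prompt messages plus the gathered context go to the model next.
    return prompt.messages, metrics, plan, indexes
```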
Prompt Discovery and Arguments
Agents discover available prompts by calling prompts/list:
# What the agent sees from prompts/list
prompts = [
    {
        "name": "analyze_table",
        "description": "Generate instructions for analyzing a database table.",
        "arguments": [
            {
                "name": "table_name",
                "description": "The name of the table to analyze.",
                "required": True,
            }
        ],
    },
    {
        "name": "debug_slow_query",
        "description": "Generate a debugging workflow for a slow database query.",
        "arguments": [
            {
                "name": "query",
                "description": "The SQL query that is running slowly.",
                "required": True,
            }
        ],
    },
]
The agent (or a human user in a chat interface) selects a prompt, fills in the arguments, and the prompt generates a context-rich instruction that the agent can follow. This makes MCP servers self-documenting — they ship not just capabilities but also the knowledge of how to use them effectively.
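The same discovery-then-render flow through the Python SDK might look like the sketch below; `session` is again assumed to be an initialized `ClientSession`, whose `list_prompts` and `get_prompt` methods mirror the prompts/list and prompts/get protocol calls:

```python
async def select_and_render(session, name: str, arguments: dict):
    """Discover available prompts, then render one with arguments."""
    listing = await session.list_prompts()
    available = {p.name for p in listing.prompts}
    if name not in available:
        raise ValueError(f"Server does not expose a prompt named {name!r}")
    # Returns the rendered result, including the message list to hand the model.
    return await session.get_prompt(name, arguments)
```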
When to Use Prompts vs System Instructions
Use MCP prompts when the instruction template is tightly coupled to the server's tools and should travel with the server. Use agent-level system instructions when the guidance is about the agent's personality, constraints, or cross-server behavior. MCP prompts are portable — if you move the server to a different agent, the prompts come with it.
FAQ
Can prompts include images or other media?
Yes. MCP prompt messages can include image content alongside text. The message content array supports both text and image content types. This is useful for prompts that guide the agent through visual analysis tasks using the server's image-processing tools.
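Per the MCP specification, image content in a prompt message is base64-encoded data paired with a MIME type. A sketch of such a message (the payload is a truncated placeholder, not a decodable image):

```python
# Base64-encoded image bytes would go here; hypothetical truncated placeholder.
image_b64 = "iVBORw0KGgo..."

image_message = {
    "role": "user",
    "content": {
        "type": "image",
        "data": image_b64,
        "mimeType": "image/png",
    },
}
```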
How are prompts different from just putting instructions in the agent's system prompt?
MCP prompts are server-defined and server-distributed. When you update a prompt on the server, every connected agent gets the new version without being redeployed. System prompt instructions are baked into the agent configuration. Prompts also accept arguments, making them dynamic templates rather than static text.
Can an agent call prompts automatically, or does a human need to select them?
In the current MCP specification, prompts are primarily user-facing — they are designed to be selected by a human in a chat interface or IDE. However, nothing prevents an agent runtime from programmatically selecting and invoking prompts based on the current task context. The protocol supports both interaction models.
#MCP #Prompts #AIAgents #PromptEngineering #AgenticAI #LearnAI #AIEngineering
CallSphere Team
Expert insights on AI voice agents and customer communication automation.