MCP Core Building Blocks: Tools, Resources & Prompts (With Diagrams)
Master the three pillars of MCP functionality — Tools (model-controlled functions), Resources (app-controlled data), and Prompts (user-controlled templates).
The Model Context Protocol defines three core primitives -- Tools, Resources, and Prompts -- that form the building blocks of every MCP server. Each primitive serves a distinct purpose and is controlled by a different actor in the system. Together, they provide a complete interface for AI models to interact with external systems.
Understanding these building blocks is essential for both using and building MCP servers effectively.
The Three Primitives at a Glance
┌────────────────────────────────────────────────────────┐
│ MCP SERVER │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ TOOLS │ │ RESOURCES │ │ PROMPTS │ │
│ │ │ │ │ │ │ │
│ │ Model calls │ │ App reads │ │ User picks │ │
│ │ functions │ │ data into │ │ workflow │ │
│ │ to act │ │ context │ │ templates │ │
│ │ │ │ │ │ │ │
│ │ Examples: │ │ Examples: │ │ Examples: │ │
│ │ search_code │ │ db schema │ │ code-review │ │
│ │ create_issue │ │ config file │ │ summarize │ │
│ │ send_message │ │ API docs │ │ debug-error │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
│ │
│ Controller: Controller: Controller: │
│ AI Model Application User │
└────────────────────────────────────────────────────────┘
| Primitive | Controller | Purpose | Side Effects | Discovery |
|---|---|---|---|---|
| Tools | AI Model | Perform actions | Yes (read, write, execute) | tools/list |
| Resources | Application | Provide context | No (read-only) | resources/list |
| Prompts | User | Invoke workflows | No (generates messages) | prompts/list |
Tools: Model-Controlled Functions
What Tools Are
Tools are the most commonly used MCP primitive. A tool is a function that the AI model can decide to invoke based on the user's request. When the model calls a tool, the MCP server executes it and returns the result.
Tools can perform any action: search files, query databases, create issues, send messages, run code, or interact with any external system. They are the primary mechanism by which AI models take action in the real world.
Tool Schema
Every tool is defined by three components:
- Name: A unique identifier (e.g., search_code, create_issue)
- Description: A natural-language explanation of what the tool does and when to use it
- Input Schema: A JSON Schema defining the tool's parameters
{
"name": "search_issues",
"description": "Search GitHub issues in a repository by keyword, label, or state. Use this when the user wants to find existing issues or check for duplicates. Returns issue number, title, state, author, and labels for each match.",
"inputSchema": {
"type": "object",
"properties": {
"owner": {
"type": "string",
"description": "Repository owner (username or organization)"
},
"repo": {
"type": "string",
"description": "Repository name"
},
"query": {
"type": "string",
"description": "Search keywords to match against issue titles and bodies"
},
"state": {
"type": "string",
"enum": ["open", "closed", "all"],
"default": "open",
"description": "Filter by issue state"
},
"labels": {
"type": "array",
"items": { "type": "string" },
"description": "Filter by label names"
},
"maxResults": {
"type": "number",
"default": 20,
"description": "Maximum number of results to return"
}
},
"required": ["owner", "repo", "query"]
}
}
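The required/enum rules in this schema can be enforced mechanically before the server runs the tool. Below is a minimal, stdlib-only sketch of that check; a production server would use a full JSON Schema validator such as the jsonschema package, since this version ignores types and nested objects:

```python
# Minimal argument validation against a tool inputSchema -- a sketch only;
# it checks required fields, unknown arguments, and enum membership.

def check_arguments(schema: dict, args: dict) -> list:
    """Return a list of validation errors (an empty list means the call is valid)."""
    errors = []
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required argument: {name}")
    for name, value in args.items():
        spec = props.get(name)
        if spec is None:
            errors.append(f"unknown argument: {name}")
        elif "enum" in spec and value not in spec["enum"]:
            errors.append(f"{name} must be one of {spec['enum']}")
    return errors

# A trimmed-down version of the search_issues schema above
schema = {
    "type": "object",
    "properties": {
        "owner": {"type": "string"},
        "repo": {"type": "string"},
        "query": {"type": "string"},
        "state": {"type": "string", "enum": ["open", "closed", "all"]},
    },
    "required": ["owner", "repo", "query"],
}

print(check_arguments(schema, {"owner": "myorg", "repo": "myapp", "query": "auth"}))  # []
print(check_arguments(schema, {"owner": "myorg", "state": "stale"}))  # 3 errors
```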
How Tools Work: The Full Flow
1. Discovery
Client → Server: tools/list
Server → Client: [{name: "search_issues", description: "...", inputSchema: {...}}, ...]
2. Tool descriptions added to AI model's context
Host → Model: "You have these tools available: search_issues - Search GitHub issues..."
3. User makes a request
User → Host: "Are there any open bugs related to authentication?"
4. Model decides to use a tool
Model → Host: tool_call(search_issues, {owner: "myorg", repo: "myapp", query: "authentication bug"})
5. Host routes through MCP client
Client → Server: {"method": "tools/call", "params": {"name": "search_issues", "arguments": {...}}}
6. Server executes and returns result
Server → Client: {"result": {"content": [{"type": "text", "text": "Found 3 issues:\n#42 - Auth timeout..."}]}}
7. Model incorporates result
Model → User: "I found 3 open authentication bugs: #42 is about timeouts..."
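On the wire, steps 5 and 6 are plain JSON-RPC 2.0 messages. The sketch below shows only the envelope construction and result parsing; real clients also manage request ids and transport framing (stdio or HTTP):

```python
import json

def make_tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Serialize a tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

def extract_text(response: str) -> str:
    """Concatenate the text content items from a tools/call result."""
    result = json.loads(response)["result"]
    return "\n".join(c["text"] for c in result["content"] if c["type"] == "text")

request = make_tool_call(1, "search_issues",
                         {"owner": "myorg", "repo": "myapp", "query": "authentication bug"})

# A response shaped like step 6 above:
response = '{"jsonrpc": "2.0", "id": 1, "result": {"content": [{"type": "text", "text": "Found 3 issues"}]}}'
print(extract_text(response))  # Found 3 issues
```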
Implementing Tools
TypeScript:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
const server = new McpServer({ name: "github-server", version: "1.0.0" });
// Simple tool with basic parameters
server.tool(
"search_issues",
"Search GitHub issues in a repository by keyword, label, or state.",
{
owner: z.string().describe("Repository owner"),
repo: z.string().describe("Repository name"),
query: z.string().describe("Search keywords"),
state: z.enum(["open", "closed", "all"]).default("open").describe("Issue state filter"),
},
async ({ owner, repo, query, state }) => {
const response = await fetch(
`https://api.github.com/search/issues?q=${encodeURIComponent(query)}+repo:${owner}/${repo}+state:${state}`,
{ headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } }
);
const data = await response.json();
const results = data.items.map(
(issue: any) => `#${issue.number} - ${issue.title} (${issue.state})`
).join("\n");
return {
content: [{
type: "text",
text: results || "No issues found matching your query.",
}],
};
}
);
Python:
from mcp.server.fastmcp import FastMCP
import httpx
import os
mcp = FastMCP("github-server")
@mcp.tool()
async def search_issues(
owner: str,
repo: str,
query: str,
state: str = "open"
) -> str:
"""Search GitHub issues in a repository by keyword, label, or state.
Use this when the user wants to find existing issues or check for duplicates.
Args:
owner: Repository owner (username or organization)
repo: Repository name
query: Search keywords to match against issue titles and bodies
state: Filter by issue state (open, closed, all)
"""
async with httpx.AsyncClient() as client:
response = await client.get(
"https://api.github.com/search/issues",
params={"q": f"{query} repo:{owner}/{repo} state:{state}"},
headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"},
)
data = response.json()
issues = data.get("items", [])
if not issues:
return "No issues found matching your query."
return "\n".join(
f"#{issue['number']} - {issue['title']} ({issue['state']})"
for issue in issues
)
Tool Annotations
The March 2025 specification revision introduced tool annotations -- metadata that helps AI models and host applications understand tool behavior:
server.tool(
"delete_file",
"Permanently delete a file from the filesystem",
{ path: z.string().describe("Path to the file to delete") },
  async ({ path }) => {
    // Ignore "already deleted" errors so repeated calls have the same effect,
    // keeping the idempotentHint below honest
    await fs.unlink(path).catch((err) => {
      if (err.code !== "ENOENT") throw err;
    });
    return { content: [{ type: "text", text: `Deleted: ${path}` }] };
  },
{
annotations: {
readOnlyHint: false, // This tool modifies state
destructiveHint: true, // This action cannot be undone
idempotentHint: true, // Calling it twice has the same effect
openWorldHint: false, // Operates on local filesystem only
}
}
);
| Annotation | Type | Meaning |
|---|---|---|
| readOnlyHint | boolean | true if the tool only reads data, false if it can modify state |
| destructiveHint | boolean | true if the tool's effects cannot be undone |
| idempotentHint | boolean | true if calling the tool multiple times with the same arguments produces the same result |
| openWorldHint | boolean | true if the tool interacts with external systems beyond the server's control |
Hosts can use these annotations to:
- Require user consent before executing destructive tools
- Allow read-only tools to execute without confirmation
- Retry idempotent tools on transient failures
- Apply additional security checks for open-world tools
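A host-side policy built on these hints might look like the following sketch. The policy names are hypothetical, not part of the spec; each host decides its own consent rules:

```python
# Map tool annotations to a hypothetical host approval policy -- an
# illustration of how hosts can use the hints, not prescribed behavior.

def approval_needed(annotations: dict) -> str:
    if annotations.get("destructiveHint"):
        return "require-user-consent"   # irreversible: always ask first
    if annotations.get("readOnlyHint"):
        return "auto-approve"           # cannot modify state
    return "confirm-once"               # writes, but reversible

print(approval_needed({"readOnlyHint": True}))      # auto-approve
print(approval_needed({"destructiveHint": True}))   # require-user-consent
```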
Tool Return Types
Tools return content in structured format:
{
"content": [
{
"type": "text",
"text": "Found 3 matching files..."
}
],
"isError": false
}
Content types include:
| Type | Description | Example |
|---|---|---|
| text | Plain text content | Search results, status messages |
| image | Base64-encoded image | Screenshots, charts, diagrams |
| audio | Base64-encoded audio | Voice recordings, generated speech |
| resource | Embedded resource reference | Link to a resource URI |
Multiple content items can be returned in a single response:
return {
content: [
{ type: "text", text: "Screenshot of the page:" },
{ type: "image", data: base64Screenshot, mimeType: "image/png" },
{ type: "text", text: "The page shows a login form with an error message." },
],
};
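Assembling such a mixed result in server code amounts to base64-encoding the binary parts. A sketch, where the bytes stand in for a real screenshot:

```python
import base64

def image_content(data: bytes, mime_type: str) -> dict:
    """Wrap raw bytes as a base64-encoded MCP image content item."""
    return {
        "type": "image",
        "data": base64.b64encode(data).decode("ascii"),
        "mimeType": mime_type,
    }

fake_png = b"\x89PNG\r\n\x1a\n"  # placeholder header bytes, not a real image
result = {
    "content": [
        {"type": "text", "text": "Screenshot of the page:"},
        image_content(fake_png, "image/png"),
    ]
}
print(result["content"][1]["mimeType"])  # image/png
```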
Resources: Application-Controlled Data
What Resources Are
Resources are data that MCP servers expose for applications to read into the AI model's context. Unlike tools, resources are not called by the model -- they are read by the host application and included as context when appropriate.
Resources provide background information that helps the AI model make better decisions: database schemas, configuration files, API documentation, project structures, and any other data that enriches context.
Resource Schema
Each resource is defined by:
- URI: A unique identifier in URI format (e.g., file:///src/config.json, schema://main/users)
- Name: A human-readable name
- Description: An explanation of what the resource contains
- MIME type: The content type (e.g., text/plain, application/json)
{
"uri": "schema://main/tables",
"name": "Database Schema",
"description": "Complete schema of the main database including all tables, columns, types, and relationships",
"mimeType": "text/plain"
}
Static Resources vs Resource Templates
Static resources have fixed URIs and return specific content:
// Static resource: always returns the same schema
server.resource(
"schema://main",
"Main database schema",
async () => ({
contents: [{
uri: "schema://main",
mimeType: "text/plain",
text: await getDatabaseSchema(),
}],
})
);
Resource templates use URI patterns with placeholders for dynamic content:
// Resource template: returns schema for a specific table
server.resourceTemplate(
"schema://main/{tableName}",
"Schema for a specific database table",
async ({ tableName }) => ({
contents: [{
uri: `schema://main/${tableName}`,
mimeType: "application/json",
text: JSON.stringify(await getTableSchema(tableName)),
}],
})
);
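Under the hood, serving a template means matching incoming URIs against the pattern and extracting the placeholder values. A regex sketch of that matching (an assumption about the mechanism, not the SDK's actual implementation; it also does not escape regex metacharacters in the literal parts of the template):

```python
import re

def match_template(template: str, uri: str):
    """Extract placeholder values if uri matches the template, else None."""
    # Turn each {name} placeholder into a named capture group
    pattern = re.sub(r"\{(\w+)\}", r"(?P<\g<1>>[^/]+)", template)
    m = re.fullmatch(pattern, uri)
    return m.groupdict() if m else None

print(match_template("schema://main/{tableName}", "schema://main/users"))
# {'tableName': 'users'}
print(match_template("schema://main/{tableName}", "other://uri"))  # None
```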
Implementing Resources
TypeScript:
const server = new McpServer({ name: "project-server", version: "1.0.0" });
// Static resource: project configuration
server.resource(
"config://project",
"Project configuration and settings",
async () => ({
contents: [{
uri: "config://project",
mimeType: "application/json",
text: await fs.readFile("package.json", "utf-8"),
}],
})
);
// Static resource: API documentation
server.resource(
"docs://api",
"API endpoint documentation for the current project",
async () => ({
contents: [{
uri: "docs://api",
mimeType: "text/markdown",
text: await fs.readFile("docs/api.md", "utf-8"),
}],
})
);
// Resource template: specific file contents
server.resourceTemplate(
"file:///{path}",
"Read a file from the project directory",
async ({ path }) => ({
contents: [{
uri: `file:///${path}`,
mimeType: getMimeType(path),
text: await fs.readFile(path, "utf-8"),
}],
})
);
Python:
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("project-server")
@mcp.resource("config://project")
def get_project_config() -> str:
"""Project configuration and settings."""
with open("package.json") as f:
return f.read()
@mcp.resource("docs://api")
def get_api_docs() -> str:
"""API endpoint documentation for the current project."""
with open("docs/api.md") as f:
return f.read()
@mcp.resource("schema://db/{table_name}")
def get_table_schema(table_name: str) -> str:
"""Schema for a specific database table."""
schema = fetch_table_schema(table_name)
return json.dumps(schema, indent=2)
Resource Discovery and Reading
The full resource flow:
// 1. Client discovers resources
// Request:
{"jsonrpc": "2.0", "id": 1, "method": "resources/list"}
// Response:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"resources": [
{
"uri": "schema://main",
"name": "Database Schema",
"description": "Complete schema of the main database",
"mimeType": "text/plain"
},
{
"uri": "config://project",
"name": "Project Configuration",
"description": "Project settings from package.json",
"mimeType": "application/json"
}
]
}
}
// 2. Client reads a specific resource
// Request:
{"jsonrpc": "2.0", "id": 2, "method": "resources/read", "params": {"uri": "schema://main"}}
// Response:
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"contents": [
{
"uri": "schema://main",
"mimeType": "text/plain",
"text": "CREATE TABLE users (\n id SERIAL PRIMARY KEY,\n email VARCHAR(255) UNIQUE,\n ..."
}
]
}
}
Resource Subscriptions
For dynamic resources, clients can subscribe to change notifications:
// Client subscribes to resource updates
{"jsonrpc": "2.0", "id": 3, "method": "resources/subscribe", "params": {"uri": "config://project"}}
// Server notifies when the resource changes
{"jsonrpc": "2.0", "method": "notifications/resources/updated", "params": {"uri": "config://project"}}
// Client reads the updated resource
{"jsonrpc": "2.0", "id": 4, "method": "resources/read", "params": {"uri": "config://project"}}
When to Use Resources vs Tools
| Scenario | Use Resource | Use Tool |
|---|---|---|
| Read database schema | Yes | No |
| Query database data | No | Yes |
| Provide API documentation | Yes | No |
| Search API endpoints | No | Yes |
| Show project configuration | Yes | No |
| Modify project configuration | No | Yes |
| Background context that aids reasoning | Yes | No |
| Action the user requested | No | Yes |
The key distinction: Resources provide context passively. Tools perform actions actively.
Prompts: User-Controlled Templates
What Prompts Are
Prompts are reusable message templates that users can invoke for common workflows. They encode expert knowledge into parameterized templates that generate structured messages for the AI model.
Unlike tools (which the model calls) and resources (which the application reads), prompts are explicitly chosen by the user. They represent predefined workflows like "review this code," "summarize this document," or "debug this error."
Prompt Schema
Each prompt is defined by:
- Name: A unique identifier (e.g., code-review, debug-error)
- Description: What the prompt does
- Arguments: Optional parameters that customize the prompt
{
"name": "code-review",
"description": "Perform a structured code review with security, performance, and readability checks",
"arguments": [
{
"name": "code",
"description": "The code to review",
"required": true
},
{
"name": "language",
"description": "Programming language",
"required": false
},
{
"name": "focus",
"description": "Specific area to focus on (security, performance, readability, all)",
"required": false
}
]
}
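Before calling prompts/get, a host can check the user-supplied arguments against this declaration. A minimal sketch:

```python
def missing_prompt_args(declared: list, provided: dict) -> list:
    """Names of required prompt arguments the user has not yet supplied."""
    return [a["name"] for a in declared
            if a.get("required") and a["name"] not in provided]

# The code-review prompt's argument declaration from above
declared = [
    {"name": "code", "required": True},
    {"name": "language", "required": False},
    {"name": "focus", "required": False},
]
print(missing_prompt_args(declared, {"language": "python"}))  # ['code']
print(missing_prompt_args(declared, {"code": "def f(): pass"}))  # []
```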
Implementing Prompts
TypeScript:
const server = new McpServer({ name: "dev-prompts", version: "1.0.0" });
server.prompt(
"code-review",
"Perform a structured code review",
{
code: z.string().describe("The code to review"),
language: z.string().default("auto").describe("Programming language"),
focus: z.enum(["security", "performance", "readability", "all"]).default("all"),
},
async ({ code, language, focus }) => ({
messages: [
{
role: "user",
content: {
type: "text",
text: `Please review the following ${language !== "auto" ? language + " " : ""}code with a focus on ${focus}.
Use this checklist:
${focus === "all" || focus === "security" ? "## Security\n- [ ] Input validation\n- [ ] Authentication/authorization checks\n- [ ] No sensitive data exposure\n- [ ] SQL injection prevention\n- [ ] XSS prevention\n\n" : ""}
${focus === "all" || focus === "performance" ? "## Performance\n- [ ] Efficient algorithms\n- [ ] No unnecessary computation\n- [ ] Appropriate caching\n- [ ] Memory management\n\n" : ""}
${focus === "all" || focus === "readability" ? "## Readability\n- [ ] Clear naming conventions\n- [ ] Appropriate comments\n- [ ] Consistent formatting\n- [ ] Single responsibility\n\n" : ""}
\`\`\`${language !== "auto" ? language : ""}
${code}
\`\`\`
Provide specific, actionable feedback for each item.`,
},
},
],
})
);
server.prompt(
"debug-error",
"Systematically debug an error with context gathering",
{
error: z.string().describe("The error message or stack trace"),
context: z.string().optional().describe("Additional context about when the error occurs"),
},
async ({ error, context }) => ({
messages: [
{
role: "user",
content: {
type: "text",
text: `I'm encountering the following error and need help debugging it:
\`\`\`
${error}
\`\`\`
${context ? `Additional context: ${context}\n\n` : ""}Please help me debug this by:
1. Identifying the root cause from the error message
2. Suggesting potential fixes in order of likelihood
3. Recommending diagnostic steps if the cause is unclear
4. Suggesting preventive measures to avoid this error in the future`,
},
},
],
})
);
Python:
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("dev-prompts")
@mcp.prompt()
def code_review(code: str, language: str = "auto", focus: str = "all") -> str:
"""Perform a structured code review with security, performance, and readability checks.
Args:
code: The code to review
language: Programming language (auto-detected if not specified)
focus: Area to focus on (security, performance, readability, all)
"""
lang_prefix = f"{language} " if language != "auto" else ""
lang_fence = language if language != "auto" else ""
checklist = ""
if focus in ("all", "security"):
checklist += "## Security\n- [ ] Input validation\n- [ ] Auth checks\n- [ ] No data exposure\n\n"
if focus in ("all", "performance"):
checklist += "## Performance\n- [ ] Efficient algorithms\n- [ ] Appropriate caching\n\n"
if focus in ("all", "readability"):
checklist += "## Readability\n- [ ] Clear naming\n- [ ] Appropriate comments\n\n"
return f"""Please review the following {lang_prefix}code with a focus on {focus}.
{checklist}
```{lang_fence}
{code}
```
Provide specific, actionable feedback for each item."""
How Prompts Are Used
The prompt flow is distinct from tools and resources because the user initiates it:
1. Discovery
Client → Server: prompts/list
Server → Client: [{name: "code-review", description: "...", arguments: [...]}, ...]
2. User selects a prompt (via UI, slash command, or menu)
User → Host: /code-review (selects the prompt)
Host → User: "Please provide: code, language (optional), focus (optional)"
User → Host: {code: "function auth() {...}", language: "typescript", focus: "security"}
3. Host requests the prompt from the server
Client → Server: prompts/get {name: "code-review", arguments: {...}}
Server → Client: {messages: [{role: "user", content: {type: "text", text: "Please review..."}}]}
4. Host sends the generated messages to the AI model
Host → Model: [Generated prompt messages]
Model → User: "Here is my security-focused review of your code..."
Prompts in Practice
Prompts are particularly valuable for:
Standardized workflows: Ensuring that code reviews, debugging sessions, or analysis tasks follow a consistent structure across a team.
Expert knowledge encoding: Capturing the approach of experienced engineers in a template that anyone can use.
Quality assurance: Making sure important checks (security, performance, accessibility) are not overlooked.
Onboarding: Giving new team members access to workflow templates that embody team best practices.
How the Three Primitives Work Together
A Complete Example: Database Server
Here is how a database MCP server might use all three primitives:
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("database-server")
# === TOOLS: Actions the model can take ===
@mcp.tool()
async def query(sql: str) -> str:
"""Execute a read-only SQL query against the database.
Args:
sql: SQL SELECT query to execute
"""
    # Simplistic guard: rejects non-SELECT and stacked statements, but a
    # read-only database role is the real safety mechanism
    cleaned = sql.strip().rstrip(";")
    if not cleaned.upper().startswith("SELECT") or ";" in cleaned:
        return "Error: Only a single SELECT query is allowed for safety."
results = await db.execute(sql)
return format_table(results)
@mcp.tool()
async def list_tables() -> str:
"""List all tables in the database with row counts."""
tables = await db.execute(
"SELECT table_name, n_live_tup FROM pg_stat_user_tables ORDER BY table_name"
)
return "\n".join(f"{t['table_name']} ({t['n_live_tup']} rows)" for t in tables)
@mcp.tool()
async def describe_table(table_name: str) -> str:
"""Get detailed column information for a specific table.
Args:
table_name: Name of the table to describe
"""
columns = await db.execute(f"""
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = '{table_name}'
ORDER BY ordinal_position
""")
return format_table(columns)
# === RESOURCES: Context the app can read ===
@mcp.resource("schema://full")
async def get_full_schema() -> str:
"""Complete database schema including tables, columns, types, and relationships."""
return await db.get_schema_dump()
@mcp.resource("schema://tables/{table_name}")
async def get_table_detail(table_name: str) -> str:
"""Detailed schema for a specific table including indexes and constraints."""
return await db.get_table_detail(table_name)
@mcp.resource("stats://database")
async def get_db_stats() -> str:
"""Database performance statistics and health metrics."""
return await db.get_stats()
# === PROMPTS: Workflow templates ===
@mcp.prompt()
def analyze_table(table_name: str) -> str:
"""Analyze a database table's structure, data quality, and suggest improvements.
Args:
table_name: The table to analyze
"""
return f"""Please analyze the '{table_name}' table by:
1. Reviewing its schema structure (use describe_table tool)
2. Checking data distribution (run sample queries)
3. Identifying potential issues:
- Missing indexes
- Nullable columns that should not be
- Data type choices
- Naming conventions
4. Suggesting improvements with specific ALTER TABLE statements"""
@mcp.prompt()
def write_query(description: str) -> str:
"""Help write an SQL query based on a natural language description.
Args:
description: What the query should do
"""
return f"""Help me write a SQL query for the following:
{description}
First, use the list_tables and describe_table tools to understand the schema.
Then write the query step by step, explaining each part.
Finally, execute the query to verify it works."""
How They Interact
User opens Claude Desktop with database server connected
1. Application reads resources at startup:
→ Reads schema://full to get the database schema
→ This schema is added to the AI's context
2. User invokes a prompt:
→ User: /write_query "Get the top 10 customers by revenue this quarter"
→ Prompt generates structured instructions for the model
3. Model uses tools to execute:
→ Model calls list_tables() to see available tables
→ Model calls describe_table("customers") for column details
→ Model calls describe_table("orders") for order details
→ Model calls query("SELECT c.name, SUM(o.amount)...")
→ Model presents the results to the user
Design Guidelines
Writing Effective Tool Descriptions
Clear tool descriptions are the single most important factor in successful tool use: the model decides which tool to call based on the descriptions alone.
| Guideline | Bad Example | Good Example |
|---|---|---|
| Be specific | "Search for things" | "Search GitHub issues by keyword, label, or state" |
| Explain when to use | "Get data" | "Use this when the user wants to find existing issues or check for duplicates" |
| Describe output | "Returns results" | "Returns issue number, title, state, author, and labels for each match" |
| Note limitations | (none) | "Maximum 100 results per query. Only searches titles and bodies, not comments" |
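These guidelines can be partially automated in a review step. The sketch below applies purely heuristic checks; the thresholds and trigger phrases are assumptions, not part of any SDK:

```python
def lint_description(description: str) -> list:
    """Heuristic warnings for a tool description; an empty list means it passes."""
    warnings = []
    if len(description) < 40:
        warnings.append("too short: say what the tool does and what it returns")
    if "use this" not in description.lower():
        warnings.append("no guidance on when the model should use the tool")
    if "return" not in description.lower():
        warnings.append("output is not described")
    return warnings

print(lint_description("Get data"))  # 3 warnings
good = ("Search GitHub issues in a repository by keyword, label, or state. "
        "Use this when the user wants to find existing issues. "
        "Returns issue number, title, and state for each match.")
print(lint_description(good))  # []
```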
Choosing Between Primitives
When designing an MCP server, use this decision framework:
Does the AI need to perform an action?
├── Yes → Use a Tool
│ Does it modify state?
│ ├── Yes → Set readOnlyHint: false
│ └── No → Set readOnlyHint: true
│
└── No → Does the AI need background context?
├── Yes → Use a Resource
│ Is it static?
│ ├── Yes → Static resource
│ └── No → Resource template + subscriptions
│
└── Is this a reusable workflow pattern?
├── Yes → Use a Prompt
└── No → Probably not needed in MCP
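The decision tree above can also be expressed as a function -- useful as a design aid when sketching a server, not as runtime code:

```python
def choose_primitive(performs_action: bool, needs_context: bool,
                     reusable_workflow: bool) -> str:
    """Walk the decision framework above and name the primitive to use."""
    if performs_action:
        return "tool"        # also set readOnlyHint appropriately
    if needs_context:
        return "resource"    # static, or a template + subscriptions if dynamic
    if reusable_workflow:
        return "prompt"
    return "probably not needed in MCP"

print(choose_primitive(True, False, False))   # tool
print(choose_primitive(False, True, False))   # resource
print(choose_primitive(False, False, True))   # prompt
```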
Summary
The three MCP primitives -- Tools, Resources, and Prompts -- provide a complete interface for AI-tool communication:
- Tools let AI models take action (model-controlled)
- Resources give AI models context (application-controlled)
- Prompts give users workflow shortcuts (user-controlled)
Together, they enable rich, capable AI applications that can reason about data, take action, and follow structured workflows.
Continue learning:
- MCP Architecture -- How clients, servers, and hosts work together
- Creating Custom Tools -- Advanced tool development guide
- What Is an MCP Server? -- Server fundamentals
- Browse MCP Servers -- See these building blocks in action
Frequently Asked Questions
What are the three building blocks of MCP?
The three building blocks are Tools (functions the AI model can call to perform actions), Resources (data that applications can read to provide context), and Prompts (reusable templates for common workflows). Each serves a different purpose and is controlled by a different actor: tools by the model, resources by the application, and prompts by the user.
What is an MCP tool?
An MCP tool is a function exposed by an MCP server that an AI model can choose to call. Tools have a name, description, and a JSON Schema defining their input parameters. Examples include search_files, create_issue, query_database, and send_message. Tools can have side effects — they can read data, write data, or trigger actions in external systems.
What is an MCP resource?
An MCP resource is a data source exposed by an MCP server that applications can read to provide context to the AI model. Resources are identified by URIs (like file:///path or schema://main) and are read-only. They provide background information — database schemas, configuration files, API documentation — that helps the AI make better decisions.
What is an MCP prompt?
An MCP prompt is a reusable template exposed by an MCP server that users can invoke for common workflows. Prompts can accept parameters and generate structured messages for the AI model. Examples include code-review (with a diff parameter), summarize-pr, and debug-error. They encode expert workflows into shareable templates.
Who controls each MCP primitive?
Each primitive has a different controller: Tools are model-controlled (the AI decides when to use them based on context), Resources are application-controlled (the host app decides when to read them and include them in context), and Prompts are user-controlled (the user explicitly selects which prompt to invoke).
Can an MCP server expose tools without resources or prompts?
Yes. MCP servers declare which primitives they support during capability negotiation. A server can expose only tools, only resources, only prompts, or any combination. Most servers expose tools; fewer expose resources; prompts are the least commonly used primitive. The server declares its capabilities during the initialization handshake.
How are tool parameters defined in MCP?
Tool parameters are defined using JSON Schema. The schema specifies parameter names, types, descriptions, required fields, defaults, enums, and validation constraints. This schema is sent to the AI model as part of the tool description, allowing the model to construct valid tool call arguments.
What are tool annotations in MCP?
Tool annotations are metadata about tool behavior added in the March 2025 specification update. They describe whether a tool is read-only or can make changes (readOnlyHint), whether it's destructive (destructiveHint), whether it's idempotent (idempotentHint), and whether it accesses the open world (openWorldHint). These help AI models and hosts make better decisions about tool usage.
What is the difference between tools and resources?
Tools are functions the AI model actively calls to perform actions (search, create, modify). Resources are passive data the application reads to provide context (schemas, configs, docs). Tools can have side effects; resources are read-only. Tools are invoked during conversation; resources are typically loaded at startup or on demand.
Can resources be dynamic in MCP?
Yes. MCP supports both static resources (fixed URI, fixed content) and dynamic resources through resource templates. Resource templates use URI patterns with placeholders (like 'users://{userId}/profile') that can be resolved with specific values. Servers can also notify clients when resource content changes through subscription notifications.