MCP Glossary: Every Model Context Protocol Term Defined

The definitive glossary of MCP terminology. Clear, concise definitions for every Model Context Protocol term — from tools and resources to transports and capabilities.

15 min read
Updated February 26, 2026
By MCP Server Spot

This glossary is the definitive reference for every term in the Model Context Protocol ecosystem. Whether you are evaluating MCP for the first time, building your first server, or architecting enterprise-scale AI integrations, this page provides clear, concise definitions for every concept you will encounter. Each entry includes a short definition followed by a deeper explanation with links to related guides.

The Model Context Protocol has introduced a specific vocabulary that draws from distributed systems, API design, and AI engineering. Some terms -- like "tool" and "resource" -- carry precise meanings in MCP that differ from their everyday usage. Others -- like "sampling" and "roots" -- are unique to the protocol. This glossary clarifies all of them.

How to use this glossary: Terms are organized by category rather than alphabetically, so you can read through related concepts together. If you are looking for a specific term, use your browser's find feature (Ctrl+F or Cmd+F) to jump directly to it.

For a broader introduction to MCP, start with What Is the Model Context Protocol?. For a visual walkthrough of how the pieces fit together, see MCP Architecture Explained.


Core Protocol Terms

Model Context Protocol (MCP)

The Model Context Protocol is an open standard developed by Anthropic that provides a universal interface for AI applications to connect with external tools, data sources, and services. MCP uses JSON-RPC 2.0 messaging over standardized transport layers to enable any AI application to communicate with any compatible server.

MCP was publicly announced on November 25, 2024, and released under the MIT License. Its purpose is to solve the N x M integration problem that plagued the AI industry: without a shared protocol, every combination of AI application and external tool required a custom integration. With MCP, each AI application implements one client and each tool implements one server, reducing the total number of integrations from N x M to N + M.

The protocol has been adopted by major AI platforms including Anthropic (Claude), OpenAI (ChatGPT), Google (Gemini), Microsoft (VS Code Copilot), Cursor, Windsurf, and many others. This broad adoption means that an MCP server built once can serve any compliant AI application. For a comprehensive introduction, see What Is the Model Context Protocol?. For the background on how and why the protocol was created, see MCP History.

JSON-RPC 2.0

JSON-RPC 2.0 is the message format used by MCP for all communication between clients and servers. It is a lightweight remote procedure call protocol that encodes requests, responses, and notifications as JSON objects.

Every MCP message is a JSON-RPC 2.0 message. The specification defines three message types: requests (which expect a response, identified by an id field), responses (which carry a result or error for a specific request), and notifications (one-way messages that do not expect a response). This structure gives MCP a clean, predictable communication pattern.

JSON-RPC was chosen for MCP because it is simple, language-agnostic, and widely supported. Unlike REST APIs that require understanding of HTTP methods, URL patterns, and status codes, JSON-RPC uses a single message structure for all interactions. This simplicity makes it straightforward to implement MCP clients and servers in any programming language. Here is what a typical tool call looks like at the JSON-RPC level:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_files",
    "arguments": {
      "query": "authentication",
      "path": "/src"
    }
  }
}
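
The server's reply carries the same id as the request. A successful result for the call above might look like this (the result shape follows the specification; the text payload is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Found 2 files matching \"authentication\" in /src"
      }
    ],
    "isError": false
  }
}
```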

Capability Negotiation

Capability negotiation is the process during MCP connection initialization where the client and server exchange information about their supported features. This handshake ensures both sides know what the other can do before any work begins.

During initialization, the client sends an initialize request that includes its protocol version and a list of capabilities it supports (such as sampling or roots). The server responds with its own protocol version and capabilities (such as tools, resources, or prompts). Both sides negotiate to the highest mutually supported protocol version. After this exchange, the client sends an initialized notification to confirm the connection is ready for normal operation.
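
Concretely, an initialize request from the client looks like this (the client name and version are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-06-18",
    "capabilities": {
      "roots": { "listChanged": true },
      "sampling": {}
    },
    "clientInfo": {
      "name": "example-client",
      "version": "1.0.0"
    }
  }
}
```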

Capability negotiation prevents errors that would occur if a client tried to use a feature the server does not support, or vice versa. For example, if a server does not declare the tools capability, the client knows not to send tools/list or tools/call requests. This design keeps MCP extensible -- new capabilities can be added in future specification versions without breaking existing implementations. For a deeper explanation, see MCP Architecture Explained.

Specification Version

The specification version identifies which revision of the MCP protocol a client or server implements. MCP uses date-based versioning, and the version is exchanged during capability negotiation so both sides can agree on a compatible protocol level.

The major specification versions are:

  • 2024-11-05 -- The initial public release, establishing the core protocol with tools, resources, prompts, stdio transport, and SSE transport.
  • 2025-03-26 -- A significant update that introduced Streamable HTTP transport, OAuth 2.1 authentication for remote servers, tool annotations, and audio content support.
  • 2025-06-18 -- The latest revision, which added elicitation (servers can ask users for input) and structured output for tools, and removed JSON-RPC batching.

Each version is designed to be backward-compatible. When a client supporting version 2025-06-18 connects to a server supporting 2025-03-26, they negotiate down to 2025-03-26 and operate using the features available at that level. This versioning approach lets the ecosystem evolve without forcing all implementations to upgrade simultaneously. For a timeline of how MCP has evolved, see MCP History.


Architecture Terms

Host Application

A host application is the user-facing AI application that contains one or more MCP clients and manages their connections to MCP servers. Hosts are what users interact with directly -- they provide the interface, run the AI model, and coordinate all MCP communication.

Examples of host applications include Claude Desktop, Cursor IDE, VS Code with GitHub Copilot, Windsurf, Zed, and any custom application built with an MCP SDK. A single host can connect to multiple MCP servers simultaneously. For instance, Claude Desktop might connect to a filesystem server, a GitHub server, and a database server at the same time, giving the AI model access to all three sets of tools.

The host bears significant responsibility in the MCP architecture. It manages security by enforcing user consent before allowing tool execution. It handles routing by directing tool calls to the correct MCP client (and therefore the correct server). It controls context by deciding which resources to read and include in the AI model's prompt. And it manages the lifecycle of all client-server connections, including startup, error handling, and graceful shutdown. For detailed guidance on configuring hosts, see MCP with Claude Desktop and MCP with Cursor and VS Code.
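
Hosts are typically configured with a simple declaration of which servers to launch. For example, Claude Desktop reads its server list from claude_desktop_config.json; an entry for the reference filesystem server looks like this (the directory path is illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/home/user/projects"
      ]
    }
  }
}
```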

MCP Client

An MCP client is a protocol-level component inside a host application that maintains a one-to-one connection with a single MCP server. The client handles all protocol details: initialization, capability negotiation, message serialization, and transport management.

It is important to distinguish the client from the host. The host is the application the user sees and interacts with. The client is an internal component -- usually invisible to the user -- that handles the communication protocol with one server. A host contains one client for each connected server. If Claude Desktop connects to three MCP servers, it creates and manages three separate MCP client instances.

The client's responsibilities include sending the initialize request when a connection is established, tracking which capabilities the server supports, serializing tool call requests into JSON-RPC messages, deserializing responses, and managing the transport layer (whether that is stdio pipes or HTTP connections). In most cases, developers do not interact with the client directly -- the MCP SDKs handle client implementation. For the architectural details, see MCP Architecture Explained.

MCP Server

An MCP server is a lightweight program that exposes tools, resources, and prompts to AI applications through the Model Context Protocol. Servers are the bridge between AI models and external systems -- they translate MCP requests into actions on databases, APIs, filesystems, and other services.

Each MCP server typically wraps a specific domain or service. A GitHub MCP server exposes tools for searching issues, creating pull requests, and reading repository files. A PostgreSQL MCP server exposes tools for running queries and resources for reading database schemas. A filesystem MCP server exposes tools for reading, writing, and searching files on the local machine.

Servers can be local (running as a subprocess on the user's machine via stdio transport) or remote (running on a network-accessible server via HTTP transport). Local servers are simpler to set up and keep data on the user's machine. Remote servers enable team sharing, centralized management, and access to cloud services. A server declares which of the three primitives it supports during capability negotiation -- it might expose only tools, only resources, only prompts, or any combination. For a detailed explanation, see What Is an MCP Server?. To browse available servers, visit the MCP Server Directory.


Primitives

Tool

A tool is a model-controlled function exposed by an MCP server that the AI model can choose to invoke during a conversation. Tools are the primary mechanism by which AI models take action in the real world through MCP.

Each tool has three components: a name (a unique identifier like search_files or create_issue), a description (natural-language text explaining what the tool does and when to use it), and an input schema (a JSON Schema definition specifying the tool's parameters, their types, and validation rules). The AI model reads these descriptions and schemas to decide which tool to call and how to construct valid arguments.
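
These three components appear directly in the wire format. A tools/list response includes entries shaped like this (the schema details are illustrative):

```json
{
  "name": "search_files",
  "description": "Search files under a directory for a query string.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Text to search for" },
      "path": { "type": "string", "description": "Directory to search in" }
    },
    "required": ["query"]
  }
}
```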

Tools can perform any operation: reading files, querying databases, creating records, sending messages, running computations, or interacting with external APIs. Unlike resources, tools can have side effects -- they can modify state, create new data, or trigger actions in external systems. The term "model-controlled" means the AI model itself decides when to invoke a tool based on the conversation context, rather than the user or application explicitly selecting it. For a complete guide to tools and how to build them, see MCP Core Building Blocks.

Resource

A resource is an application-controlled data source exposed by an MCP server, identified by a URI, that provides contextual information to AI models. Resources are read-only and are designed to enrich the AI model's understanding of the environment it is working in.

Resources serve a fundamentally different purpose than tools. While tools perform actions, resources provide passive context. A database MCP server might expose the full database schema as a resource so the AI model understands the table structure before writing queries. A project server might expose configuration files, API documentation, or dependency lists as resources. The host application decides when to read resources and include them in the model's context.

Each resource is identified by a URI (such as schema://main/tables or file:///src/config.json) and has an associated MIME type. Resources can be static (fixed content at a fixed URI) or dynamic (content that changes and can be subscribed to for updates). The key distinction from tools is control: resources are application-controlled, meaning the host application (not the AI model) decides when and how to read them. For examples and implementation patterns, see MCP Core Building Blocks.
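
When the host reads a resource, the server returns its contents with the URI and MIME type attached. A resources/read result might look like this (the content itself is illustrative):

```json
{
  "contents": [
    {
      "uri": "schema://main/tables",
      "mimeType": "text/plain",
      "text": "users(id, email)\norders(id, user_id, total)"
    }
  ]
}
```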

Prompt

A prompt is a user-controlled template exposed by an MCP server that generates structured messages for common workflows. Prompts encode expert knowledge into reusable, parameterized templates that users can invoke explicitly.

Prompts occupy a unique position in the MCP primitive hierarchy. Tools are controlled by the model (the AI decides when to call them). Resources are controlled by the application (the host decides when to read them). Prompts are controlled by the user -- the user explicitly chooses to invoke a prompt, typically through a UI element like a slash command or dropdown menu.

A prompt has a name, a description, and optional arguments. When invoked, it returns one or more messages that the host sends to the AI model. For example, a code-review prompt might accept a code argument and a focus argument (security, performance, or readability), then generate a structured review request with a detailed checklist. This ensures consistent, thorough workflows across a team without requiring each user to craft the perfect prompt from scratch. For implementation examples, see MCP Core Building Blocks.
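
Invoking a prompt returns the messages the host should send to the model. A prompts/get result for a hypothetical code-review prompt might look like:

```json
{
  "description": "Review code with a security focus",
  "messages": [
    {
      "role": "user",
      "content": {
        "type": "text",
        "text": "Review the following code for security issues:\n\ndef login(user, pw): ..."
      }
    }
  ]
}
```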

Tool Annotations

Tool annotations are metadata hints attached to tool definitions that describe the tool's behavior characteristics. They were introduced in the 2025-03-26 specification revision to help host applications and AI models make better decisions about tool usage.

The four standard annotations are:

  • readOnlyHint -- Indicates whether the tool only reads data (true) or can modify state (false). A file search tool would be read-only; a file delete tool would not.
  • destructiveHint -- Indicates whether the tool's effects are irreversible. Deleting a file is destructive; creating a file is not.
  • idempotentHint -- Indicates whether calling the tool multiple times with the same arguments produces the same result. Reading a file is idempotent; appending to a file is not.
  • openWorldHint -- Indicates whether the tool interacts with external systems beyond the server's direct control. A tool that queries a public API is open-world; a tool that reads local configuration is not.

These annotations are called "hints" because they are informational rather than enforced. Host applications can use them to implement smart policies -- for example, allowing read-only tools to execute without user confirmation while requiring explicit consent for destructive operations. They also help AI models reason about the consequences of tool calls before executing them. For detailed examples of annotation usage, see MCP Core Building Blocks.
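
On the wire, annotations ride along with the tool definition. A file-deletion tool might be declared like this (the tool name and schema are illustrative):

```json
{
  "name": "delete_file",
  "description": "Permanently delete a file at the given path.",
  "inputSchema": {
    "type": "object",
    "properties": { "path": { "type": "string" } },
    "required": ["path"]
  },
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": true,
    "idempotentHint": true,
    "openWorldHint": false
  }
}
```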

Resource Template

A resource template is a URI pattern with placeholders that allows dynamic resolution of resource content. Templates use URI Template syntax (as defined in RFC 6570) to let clients request specific resources by filling in parameter values.

While static resources have fixed URIs and return predetermined content, resource templates define patterns like schema://main/tables/{table}, where the {table} placeholder is replaced with a specific value at request time. This enables a single template definition to serve many different resources: a database server that defines this template can return the detailed schema for whatever table name the client provides.

Resource templates are particularly useful when the set of available resources is large or dynamic. Instead of listing every possible resource individually (which might mean hundreds of table schemas or thousands of file paths), the server exposes a template pattern and clients fill in the specifics. The host application can present these templates as parameterized options in the UI, allowing users or the application itself to request exactly the resource they need.
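
In the wire format, a template is advertised with a uriTemplate field in RFC 6570 syntax. An entry in a resources/templates/list result might look like this (the names are illustrative):

```json
{
  "uriTemplate": "schema://main/tables/{table}",
  "name": "Table schema",
  "description": "Detailed schema for a single database table",
  "mimeType": "application/json"
}
```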


Transports

Transport

A transport is the communication layer that carries JSON-RPC messages between MCP clients and servers. The transport handles the low-level details of message delivery -- serialization, framing, connection management -- while the higher-level protocol logic remains transport-agnostic.

MCP is designed to be transport-independent. The same tools, resources, and prompts work identically regardless of whether messages travel over local process pipes or HTTP connections. This separation of concerns means server developers write their tool logic once, and it works across all transport types.

The MCP specification currently defines three transport mechanisms: stdio for local communication, SSE (Server-Sent Events) for remote communication (from the original spec), and Streamable HTTP for modern remote deployments (introduced in the 2025-03-26 revision). The choice of transport depends on the deployment scenario -- local desktop tools use stdio, while cloud-hosted services use HTTP-based transports. For an in-depth comparison, see Local vs Remote MCP Servers.

stdio Transport

The stdio (standard input/output) transport is a local communication mechanism where the MCP host launches the server as a child process and exchanges messages through the process's standard input and output streams. It is the simplest and most common transport for local MCP servers.

When using stdio transport, the host application spawns the MCP server as a subprocess (for example, by running npx @modelcontextprotocol/server-filesystem /home/user/projects or python my_server.py). The host writes JSON-RPC messages to the server's stdin and reads responses from the server's stdout. Each message is delimited by newlines, making the framing straightforward.
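
The newline-delimited framing can be sketched in a few lines of plain Python. This is a simplified illustration of the framing idea, not the SDK's actual implementation:

```python
import json

def frame(message: dict) -> bytes:
    """Serialize one JSON-RPC message for stdio transport: one message per line."""
    return (json.dumps(message) + "\n").encode("utf-8")

def unframe(line: bytes) -> dict:
    """Parse one newline-delimited JSON-RPC message read from a stdio stream."""
    return json.loads(line.decode("utf-8"))

# A host would write this to the server's stdin and read replies from stdout.
ping = frame({"jsonrpc": "2.0", "id": 1, "method": "ping"})
print(unframe(ping)["method"])  # -> ping
```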

stdio transport has several advantages for local servers: it requires no network configuration, keeps all data on the local machine, has minimal latency (no HTTP overhead), and the host can manage the server's lifecycle directly (start, stop, restart). The downside is that it only works locally -- the host and server must run on the same machine. This is the transport used by Claude Desktop, Cursor, and most other desktop AI applications when connecting to local MCP servers. For configuration examples, see MCP with Claude Desktop.

SSE Transport

The SSE (Server-Sent Events) transport is a remote communication mechanism where the server sends events to the client over a long-lived HTTP connection while the client sends requests via separate HTTP POST calls. It was part of the original MCP specification for enabling remote server deployments.

SSE transport works by establishing two communication channels: a persistent HTTP connection from the server to the client for streaming events (using the Server-Sent Events standard), and standard HTTP POST requests from the client to the server for sending commands. This asymmetric design means the server can push notifications and responses to the client in real time without the client polling.

While SSE transport enabled the first generation of remote MCP servers, it has been largely superseded by Streamable HTTP transport (introduced in the 2025-03-26 specification). SSE required maintaining a persistent connection, which could be problematic with load balancers, proxies, and serverless infrastructure. The newer Streamable HTTP transport addresses these limitations while maintaining full backward compatibility. Existing SSE-based servers continue to work, but new remote server implementations are encouraged to use Streamable HTTP instead. For more on remote deployment patterns, see Deploying Remote MCP Servers.

Streamable HTTP Transport

Streamable HTTP is the newest MCP transport mechanism, introduced in the 2025-03-26 specification revision. It uses standard HTTP request-response patterns with optional streaming, designed to work reliably with modern web infrastructure including load balancers, CDNs, and serverless platforms.

Unlike the SSE transport, which required a persistent connection, Streamable HTTP operates with regular HTTP requests. The client sends a POST request to the server's MCP endpoint, and the server can respond either with a single JSON response (for simple request-response patterns) or by upgrading the response to a Server-Sent Events stream (for long-running operations or server-initiated notifications). This flexibility means the transport adapts to the needs of each interaction.

Streamable HTTP was designed to solve practical deployment challenges. Persistent connections (as required by SSE) are difficult to maintain through load balancers, often incompatible with serverless functions, and can be dropped by aggressive proxies or firewalls. Streamable HTTP works with standard HTTP infrastructure out of the box. It also supports stateless operation (where each request is independent) or stateful sessions (where a session ID ties requests together), giving server implementers flexibility in their architecture. For deployment strategies, see Deploying Remote MCP Servers and Local vs Remote MCP Servers.
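
The client side of a Streamable HTTP interaction can be sketched with only the standard library. This builds (but does not send) a single POST carrying one JSON-RPC message; the endpoint URL is hypothetical, and the Accept header signals that the client can handle either a plain JSON reply or an SSE stream:

```python
import json
import urllib.request

def build_mcp_post(url: str, message: dict) -> urllib.request.Request:
    # One JSON-RPC message per POST request to the server's MCP endpoint.
    return urllib.request.Request(
        url,
        data=json.dumps(message).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json, text/event-stream",
        },
        method="POST",
    )

req = build_mcp_post(
    "https://example.com/mcp",  # hypothetical endpoint
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
)
```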


Capabilities

Sampling

Sampling is an MCP capability that allows servers to request AI model completions from the host application. When a server has sampling capability, it can ask the host to run a prompt through its language model and return the result, enabling agentic behaviors within the server itself.

Sampling inverts the typical MCP flow. Normally, the AI model calls tools on the server. With sampling, the server calls back to the AI model through the host. This enables powerful patterns: an MCP server could analyze a dataset, use sampling to ask the AI model to interpret the results, then use that interpretation to decide what to analyze next -- all within a single tool execution.

The host application retains full control over sampling requests. It can inspect the prompt the server wants to send, apply rate limits, require user approval, or reject the request entirely. This human-in-the-loop design ensures that servers cannot use the AI model without appropriate oversight. Sampling is declared as a client capability during initialization -- if the client does not support sampling, the server knows not to attempt it. For a discussion of how sampling enables agent workflows, see MCP for AI Agents.
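
A sampling request flows from server to client as a sampling/createMessage message. A minimal example (the prompt text is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 42,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Summarize the anomalies in this dataset: ..."
        }
      }
    ],
    "maxTokens": 200
  }
}
```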

Roots

Roots is an MCP capability where the client informs the server about relevant filesystem or workspace locations. By providing roots, the client gives the server context about where it should focus its operations, particularly for file-based tools.

When a user opens a project in an IDE like Cursor or VS Code, the host application knows which directory the user is working in. By declaring roots (for example, /home/user/projects/my-app), the host tells connected MCP servers where relevant files are located. A filesystem server can then scope its operations to those directories rather than requiring the user to specify full paths every time.

Roots are declared as a client capability and can be updated during a session. If the user opens a new workspace or changes their working directory, the host can send a notification with updated roots. Servers that support roots can use this information to provide better defaults, restrict operations to appropriate directories, and offer more relevant suggestions. Roots are particularly important for MCP in software development workflows where file context is essential.
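
A server discovers roots by sending a roots/list request; the client's result might look like this (the path is illustrative):

```json
{
  "roots": [
    {
      "uri": "file:///home/user/projects/my-app",
      "name": "my-app"
    }
  ]
}
```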


Development Tools

FastMCP

FastMCP is a high-level Python framework for building MCP servers with minimal boilerplate code. It is the recommended approach for building MCP servers in Python and is included as part of the official MCP Python SDK.

FastMCP simplifies server development by using Python decorators to define tools, resources, and prompts. Instead of manually constructing JSON Schemas, handling protocol messages, and managing transport connections, developers write standard Python functions and decorate them with @mcp.tool(), @mcp.resource(), or @mcp.prompt(). FastMCP automatically generates the JSON Schema from Python type hints, handles parameter validation, manages the protocol lifecycle, and sets up the transport layer.

Here is a minimal FastMCP server:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

if __name__ == "__main__":
    mcp.run()

That is a complete, functional MCP server. FastMCP infers the tool name from the function name, generates the description from the docstring, and builds the input schema from the type annotations. For a full tutorial on building servers with FastMCP, see Build an MCP Server in Python.

MCP Inspector

The MCP Inspector is an official developer tool for interactively testing and debugging MCP servers. It provides a web-based interface where you can connect to any MCP server, explore its capabilities, and manually invoke tools, resources, and prompts.

The Inspector is the primary debugging tool in the MCP ecosystem. When building a new server, you can use the Inspector to verify that your tools are registered correctly, that their schemas are accurate, that they return expected results, and that error handling works properly. It displays the raw JSON-RPC messages exchanged between client and server, making it easy to diagnose protocol-level issues.

To use the Inspector, you run it from the command line:

npx @modelcontextprotocol/inspector

This launches a local web interface where you can configure the server command, connect to it, and interact with all its capabilities. The Inspector supports both stdio and HTTP transports, and it displays comprehensive information about each tool, resource, and prompt the server exposes. For a walkthrough of using the Inspector in your development workflow, see Testing and Debugging MCP Servers.

MCP SDK

An MCP SDK (Software Development Kit) is a library that provides the building blocks for creating MCP clients and servers in a specific programming language. SDKs handle protocol implementation details so developers can focus on their tool logic rather than message serialization and transport management.

The MCP ecosystem has official and community-maintained SDKs for multiple languages:

  • Python -- mcp on PyPI -- Official SDK, includes FastMCP
  • TypeScript -- @modelcontextprotocol/sdk on npm -- Official SDK, used by most Node.js servers
  • Java/Kotlin -- io.modelcontextprotocol:sdk on Maven -- Official SDK for JVM languages
  • C# -- ModelContextProtocol on NuGet -- Official SDK for .NET
  • Swift -- mcp-swift-sdk via SPM -- Official SDK for Apple platforms
  • Go -- github.com/mark3labs/mcp-go -- Community SDK, widely adopted
  • Rust -- rust-mcp-sdk -- Community SDK, growing ecosystem

Each SDK provides abstractions for creating servers, defining tools with typed parameters, handling transport connections, and managing the protocol lifecycle. The TypeScript and Python SDKs are the most mature and widely used, and most MCP server examples and tutorials use one of these two. For language-specific guides, see Build an MCP Server in Python and Build an MCP Server in Node.js.


Additional Terms

Elicitation

Elicitation is an MCP capability introduced in the 2025-06-18 specification that allows servers to request additional information from the user during a tool execution. When a server needs clarification or input that was not provided in the original tool call, it can ask the host to prompt the user.

For example, a deployment tool might need the user to confirm which environment to deploy to, or a database tool might ask the user to choose between multiple ambiguous table matches. Elicitation provides a structured way for servers to request this input without breaking the tool execution flow. The host application controls how elicitation requests are presented to the user and can reject them if they violate security policies.

Notification

A notification is a one-way JSON-RPC message that does not expect a response. Both clients and servers can send notifications to inform the other side of events or state changes.

Common notifications in MCP include notifications/tools/list_changed (the server's available tools have changed), notifications/resources/updated (a resource's content has changed), and notifications/roots/list_changed (the client's roots have been updated). Notifications enable reactive behavior -- a client can re-fetch the tool list when it receives a change notification, or re-read a resource when it learns the content has been updated.
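
Because notifications carry no id, they are fire-and-forget. The tool-list change notification, for example, is simply:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```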

Content Types

Content types define the format of data returned by tools and resources in MCP. The protocol supports multiple content types to handle different kinds of information.

The standard content types are:

  • text -- Plain text content, the most common type used for tool results and textual data
  • image -- Base64-encoded image data with a MIME type, used for screenshots, charts, and visual content
  • audio -- Base64-encoded audio data, added in the 2025-03-26 specification for voice and sound content
  • resource -- An embedded reference to an MCP resource URI, allowing tools to return links to resources rather than inline data

Tools and resources can return multiple content items in a single response, mixing types as needed. For example, a screenshot tool might return both an image and a text description of what the image shows.
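
A tool result mixing content types looks like this (the base64 payload is truncated and the description text is illustrative):

```json
{
  "content": [
    {
      "type": "image",
      "data": "iVBORw0KGgo...",
      "mimeType": "image/png"
    },
    {
      "type": "text",
      "text": "Screenshot of the login page with the error banner visible."
    }
  ]
}
```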

OAuth 2.1

OAuth 2.1 is the authentication standard used by remote MCP servers to verify client identity and authorize access. It was added to MCP in the 2025-03-26 specification revision as the required authentication mechanism for HTTP-based transports.

When connecting to a remote MCP server over Streamable HTTP, the client must authenticate using OAuth 2.1 flows. This ensures that only authorized users and applications can access the server's tools and resources. The host application typically manages the OAuth flow on behalf of the user, handling token acquisition, refresh, and secure storage. For security architecture details, see MCP Security Model.

Composability

Composability in MCP refers to the ability to chain multiple servers together, where one server acts as both a server (to a host) and a client (to other servers). This enables hierarchical and multi-agent architectures.

A composable MCP setup might have an orchestrator server that receives requests from a host application, then delegates subtasks to specialized servers -- one for code analysis, one for documentation lookup, one for testing. The orchestrator appears as a single server to the host but internally coordinates multiple servers to complete complex workflows. For architectural patterns, see Composability in MCP.


Quick Reference Table

  • MCP (Core) -- Open standard for connecting AI apps to external tools and data
  • JSON-RPC 2.0 (Core) -- Lightweight message format used for all MCP communication
  • Capability Negotiation (Core) -- Initialization process where client and server exchange supported features
  • Specification Version (Core) -- Date-based version identifier for the MCP protocol revision
  • Host Application (Architecture) -- User-facing AI app that contains MCP clients (e.g., Claude Desktop)
  • MCP Client (Architecture) -- Protocol component inside a host that connects to one server
  • MCP Server (Architecture) -- Program that exposes tools, resources, and prompts via MCP
  • Tool (Primitive) -- Model-controlled function the AI can invoke to perform actions
  • Resource (Primitive) -- Application-controlled, read-only data source identified by URI
  • Prompt (Primitive) -- User-controlled template that generates structured messages
  • Tool Annotations (Primitive) -- Metadata hints describing tool behavior characteristics
  • Resource Template (Primitive) -- URI pattern with placeholders for dynamic resource resolution
  • Transport (Transport) -- Communication layer carrying JSON-RPC messages between client and server
  • stdio (Transport) -- Local transport using standard input/output of a child process
  • SSE (Transport) -- Remote transport using Server-Sent Events (original spec)
  • Streamable HTTP (Transport) -- Modern remote transport using HTTP with optional streaming
  • Sampling (Capability) -- Server requesting AI completions from the host's language model
  • Roots (Capability) -- Client informing the server of relevant filesystem locations
  • FastMCP (Dev Tool) -- High-level Python framework for building MCP servers
  • MCP Inspector (Dev Tool) -- Web-based tool for testing and debugging MCP servers
  • MCP SDK (Dev Tool) -- Language-specific library for building MCP clients and servers
  • Elicitation (Capability) -- Server requesting additional user input during tool execution
  • Notification (Core) -- One-way JSON-RPC message that does not expect a response
  • Content Types (Core) -- Formats for data returned by tools and resources (text, image, audio)
  • OAuth 2.1 (Security) -- Authentication standard for remote MCP server access
  • Composability (Architecture) -- Chaining servers together in hierarchical architectures

Further Reading

This glossary covers the foundational vocabulary of the Model Context Protocol. To go deeper into any of these concepts, explore the following guides:

Frequently Asked Questions

What is the MCP glossary?

The MCP glossary is a comprehensive reference of all terms and concepts used in the Model Context Protocol ecosystem. It covers core protocol components, transport mechanisms, development tools, and specification terminology.

What is the difference between an MCP tool and an MCP resource?

An MCP tool is a function the AI model can actively call to perform actions (like searching files or creating issues). An MCP resource is passive data the application reads to provide context (like database schemas or configuration files). Tools are model-controlled and can have side effects; resources are application-controlled and read-only.
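The contrast shows up directly on the wire. The method names `tools/call` and `resources/read` follow the MCP specification; the tool name, arguments, and URI below are illustrative placeholders:

```python
import json

# Model-controlled side: the client invokes a tool on the model's behalf.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_files",           # illustrative tool name
        "arguments": {"query": "TODO"},
    },
}

# Application-controlled side: the host reads a resource to build context.
resource_read = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "file:///project/schema.sql"},  # illustrative URI
}

print(json.dumps(tool_call))
print(json.dumps(resource_read))
```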

What is an MCP host application?

An MCP host application is the user-facing AI application that contains one or more MCP clients. Examples include Claude Desktop, Cursor, VS Code with Copilot, and custom applications. The host manages security, user consent, and coordinates between multiple MCP client-server connections.

What does stdio mean in MCP?

stdio (standard input/output) is a local transport mechanism in MCP where the host application launches the MCP server as a child process. Communication happens through the process's stdin (for messages sent to the server) and stdout (for messages sent back to the client). It is the simplest and most common transport for local servers.
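Concretely, a host configures a stdio server by declaring the command to launch. A sketch in the `claude_desktop_config.json` format used by Claude Desktop, where the server package and directory path are illustrative:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/me/projects"
      ]
    }
  }
}
```

On startup, the host runs this command, keeps the child process alive for the session, and exchanges JSON-RPC messages over its stdin and stdout.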

What is MCP sampling?

Sampling is an MCP capability that allows servers to request AI completions from the host application's language model. This enables agentic behaviors where the server can use AI reasoning as part of its operations, while the host maintains control over model access and user approval.
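A sampling request reverses the usual direction: the server asks the client to run a completion. A minimal sketch of the request shape, where the method name `sampling/createMessage` follows the MCP specification and the message content and token limit are illustrative:

```python
import json

# The *server* sends this to the *client*; the host decides whether to
# approve it and which model actually handles the completion.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this diff."},
            }
        ],
        "maxTokens": 200,
    },
}

print(json.dumps(sampling_request, indent=2))
```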

What are MCP tool annotations?

Tool annotations are metadata hints added to MCP tool definitions that describe a tool's behavior. They include readOnlyHint (whether the tool only reads data), destructiveHint (whether it can delete or modify data), idempotentHint (whether calling it multiple times has the same effect), and openWorldHint (whether it interacts with external systems).
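An illustrative tool definition carrying all four hints; the tool itself (`delete_branch`) is hypothetical, and clients treat these annotations as advisory metadata, not enforced guarantees:

```python
delete_branch_tool = {
    "name": "delete_branch",            # hypothetical tool
    "description": "Delete a git branch in the workspace.",
    "inputSchema": {
        "type": "object",
        "properties": {"branch": {"type": "string"}},
        "required": ["branch"],
    },
    "annotations": {
        "readOnlyHint": False,      # it modifies state
        "destructiveHint": True,    # the deletion cannot be undone via the tool
        "idempotentHint": True,     # deleting the same branch twice leaves the same state
        "openWorldHint": False,     # acts only on the local workspace
    },
}

print(delete_branch_tool["annotations"])
```

A host might use `destructiveHint` to require explicit user confirmation before invoking the tool.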

What is FastMCP?

FastMCP is a high-level Python framework for building MCP servers. It simplifies server development by providing decorators (@mcp.tool(), @mcp.resource(), @mcp.prompt()) that automatically handle JSON Schema generation, parameter validation, and protocol compliance. It is the recommended way to build MCP servers in Python.

What is the MCP Inspector?

The MCP Inspector is an official developer tool for testing and debugging MCP servers. It provides a web-based interface where you can connect to any MCP server, list its tools/resources/prompts, invoke them with custom parameters, and inspect the JSON-RPC messages. It is the primary debugging tool for MCP development.

Related Guides