MCP OAuth 2.1 Implementation Guide: Secure Authentication for Remote Servers
Step-by-step guide to implementing OAuth 2.1 authentication for remote MCP servers, covering PKCE, token handling, and auth middleware.
Implementing OAuth 2.1 for a remote MCP server requires configuring an authorization endpoint, supporting PKCE for all flows, issuing and validating access tokens, and handling refresh token rotation. OAuth 2.1 is the mandatory authentication standard for any MCP server exposed over HTTP, and this guide walks through every step from registering your server with an identity provider to building the auth middleware that protects your tools.
If you are deploying MCP servers beyond your local machine -- whether for a team, an organization, or a public audience -- OAuth 2.1 is not optional. The MCP security model specifies it as the required authentication mechanism for all HTTP-based transports, including SSE and the newer Streamable HTTP transport.
Why OAuth 2.1 for MCP
Local MCP servers communicate over stdio, inheriting the operating system user's permissions with no additional authentication needed. Remote servers are different. They are accessible over the network, potentially by multiple users, and they need a robust way to verify identity and authorize access.
OAuth 2.1 consolidates the best practices from OAuth 2.0 and eliminates insecure patterns. Here is what makes it the right choice for MCP:
| Feature | OAuth 2.0 | OAuth 2.1 |
|---|---|---|
| PKCE requirement | Optional | Mandatory for all flows |
| Implicit flow | Allowed | Removed entirely |
| Redirect URI matching | Loose | Exact match required |
| Refresh token rotation | Optional | Recommended/enforced |
| Bearer token in URL | Allowed | Prohibited |
The removal of the implicit flow and the mandatory PKCE requirement eliminate the two most exploited attack vectors in OAuth 2.0 deployments. For MCP servers that expose powerful tools -- file access, database queries, API calls -- these protections are essential.
The MCP OAuth Authorization Flow
The MCP specification defines a specific OAuth flow for remote servers. Here is the complete sequence:
Step 1: Client Discovery
When an MCP client connects to a remote server, it first requests the server's OAuth metadata. The server exposes this at a well-known endpoint:
GET /.well-known/oauth-authorization-server
The response includes the authorization endpoint, token endpoint, supported scopes, and other configuration:
{
"issuer": "https://mcp.example.com",
"authorization_endpoint": "https://mcp.example.com/authorize",
"token_endpoint": "https://mcp.example.com/token",
"registration_endpoint": "https://mcp.example.com/register",
"scopes_supported": ["mcp:tools", "mcp:resources", "mcp:prompts"],
"response_types_supported": ["code"],
"grant_types_supported": ["authorization_code", "refresh_token"],
"code_challenge_methods_supported": ["S256"],
"token_endpoint_auth_methods_supported": ["none"]
}
Step 2: Dynamic Client Registration
MCP supports dynamic client registration (RFC 7591), allowing clients to register themselves without manual setup:
POST /register
Content-Type: application/json
{
"client_name": "Claude Desktop",
"redirect_uris": ["http://localhost:8080/callback"],
"grant_types": ["authorization_code", "refresh_token"],
"response_types": ["code"],
"token_endpoint_auth_method": "none"
}
The server responds with a client_id and registration details. This is particularly important for MCP because new clients need to connect without prior arrangement with the server operator.
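A registration response (per RFC 7591) returns the issued client_id and echoes the accepted metadata. The exact fields vary by server, so treat this shape as illustrative:

```json
{
  "client_id": "abc123",
  "client_id_issued_at": 1718000000,
  "client_name": "Claude Desktop",
  "redirect_uris": ["http://localhost:8080/callback"],
  "grant_types": ["authorization_code", "refresh_token"],
  "response_types": ["code"],
  "token_endpoint_auth_method": "none"
}
```

The client stores this client_id and uses it in every subsequent authorization and token request.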
Step 3: Authorization with PKCE
The client generates a PKCE code verifier and challenge, then redirects the user to the authorization endpoint:
GET /authorize?
response_type=code&
client_id=abc123&
redirect_uri=http://localhost:8080/callback&
scope=mcp:tools mcp:resources&
state=random_state_value&
code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM&
code_challenge_method=S256
The PKCE flow works as follows:
- The client generates a random code_verifier (43-128 characters)
- The client computes code_challenge = BASE64URL(SHA256(code_verifier))
- The challenge is sent with the authorization request
- The verifier is sent with the token exchange request
- The server verifies that SHA256(verifier) matches the original challenge
This prevents authorization code interception attacks, even if an attacker captures the authorization code.
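The client side of this step fits in a few lines of Node.js. The function name generatePKCEPair is an illustrative helper, not part of any MCP SDK:

```javascript
import crypto from "crypto";

// Client-side PKCE: generate a random verifier and derive its S256 challenge.
// 32 random bytes base64url-encode to a 43-character verifier, the minimum
// length permitted by RFC 7636.
function generatePKCEPair() {
  const codeVerifier = crypto.randomBytes(32).toString("base64url");
  const codeChallenge = crypto
    .createHash("sha256")
    .update(codeVerifier)
    .digest("base64url");
  return { codeVerifier, codeChallenge };
}
```

The client keeps codeVerifier private until the token exchange and sends only codeChallenge with the authorization request.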
Step 4: Token Exchange
After the user authorizes, the authorization server redirects back with an authorization code. The client exchanges it for tokens:
POST /token
Content-Type: application/x-www-form-urlencoded
grant_type=authorization_code&
code=AUTH_CODE_HERE&
redirect_uri=http://localhost:8080/callback&
client_id=abc123&
code_verifier=THE_ORIGINAL_VERIFIER_STRING
The response includes the access token and refresh token:
{
"access_token": "eyJhbGciOiJSUzI1NiIs...",
"token_type": "Bearer",
"expires_in": 3600,
"refresh_token": "dGhpcyBpcyBhIHJlZnJlc2g...",
"scope": "mcp:tools mcp:resources"
}
Implementing the Auth Middleware
With the flow understood, here is how to build the server-side middleware that validates incoming requests and manages the OAuth endpoints.
Token Validation Middleware
Every MCP request to a remote server must include a valid Bearer token. Here is how to validate it in a Node.js/Express-based MCP server:
import jwt from "jsonwebtoken";
// Middleware to validate Bearer tokens on every MCP request.
// PUBLIC_KEY is the authorization server's public verification key,
// loaded at startup (e.g. from JWKS or a PEM file).
function validateMCPToken(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader || !authHeader.startsWith("Bearer ")) {
    return res.status(401).json({
      error: "unauthorized",
      error_description: "Bearer token required"
    });
  }
  const token = authHeader.slice(7);
  try {
    const decoded = jwt.verify(token, PUBLIC_KEY, {
      algorithms: ["RS256"],
      issuer: "https://mcp.example.com",
      audience: "mcp-server"
    });
    // Attach user info and scopes to the request
    req.user = decoded;
    req.scopes = decoded.scope ? decoded.scope.split(" ") : [];
    next();
  } catch (err) {
    if (err.name === "TokenExpiredError") {
      return res.status(401).json({
        error: "token_expired",
        error_description: "Access token has expired"
      });
    }
    return res.status(401).json({
      error: "invalid_token",
      error_description: "Token validation failed"
    });
  }
}
Scope-Based Tool Authorization
MCP scopes control what capabilities a client can access. Implement scope checking at the tool execution level:
// Check if the authenticated client has the required scope
function requireScope(scope) {
  return (req, res, next) => {
    if (!req.scopes.includes(scope)) {
      return res.status(403).json({
        error: "insufficient_scope",
        error_description: "Required scope: " + scope
      });
    }
    next();
  };
}

// Apply to MCP tool execution endpoint
app.post("/mcp/tools/call",
  validateMCPToken,
  requireScope("mcp:tools"),
  handleToolCall
);

// Apply to MCP resource read endpoint
app.get("/mcp/resources/*",
  validateMCPToken,
  requireScope("mcp:resources"),
  handleResourceRead
);
Defining Custom Scopes
While the MCP specification defines basic scopes, you can create fine-grained scopes for your server:
| Scope | Access Level |
|---|---|
| mcp:tools | Execute any tool on the server |
| mcp:tools:read | Execute read-only tools only |
| mcp:tools:write | Execute tools that modify state |
| mcp:resources | Read resources and resource templates |
| mcp:prompts | Access prompt templates |
| mcp:admin | Server administration capabilities |
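If you adopt hierarchical scopes like these, the server must decide whether a broad scope implies its narrower children. One possible convention, sketched below, treats mcp:tools as satisfying mcp:tools:read and mcp:tools:write; this hierarchy is an illustrative design choice, not part of the MCP specification:

```javascript
// A granted scope satisfies a required scope if it matches exactly,
// or if it is a parent in the colon-delimited hierarchy
// (e.g. "mcp:tools" covers "mcp:tools:read").
function scopeSatisfies(grantedScopes, requiredScope) {
  return grantedScopes.some(
    (s) => s === requiredScope || requiredScope.startsWith(s + ":")
  );
}
```

Note that the implication runs one way only: a narrow scope such as mcp:tools:read never grants its parent.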
Refresh Token Handling
Access tokens are short-lived (typically 1 hour). MCP clients need to refresh them without interrupting the user:
// Token refresh endpoint
app.post("/token", async (req, res) => {
  const grantType = req.body.grant_type;
  if (grantType === "refresh_token") {
    const refreshToken = req.body.refresh_token;
    const clientId = req.body.client_id;

    // Validate the refresh token
    const stored = await tokenStore.findRefreshToken(refreshToken);
    if (!stored || stored.clientId !== clientId) {
      return res.status(400).json({
        error: "invalid_grant",
        error_description: "Invalid refresh token"
      });
    }

    // Rotate the refresh token (OAuth 2.1 best practice)
    await tokenStore.revokeRefreshToken(refreshToken);
    const newAccessToken = generateAccessToken(stored.userId, stored.scopes);
    const newRefreshToken = generateRefreshToken();
    await tokenStore.storeRefreshToken(newRefreshToken, {
      userId: stored.userId,
      clientId: clientId,
      scopes: stored.scopes
    });

    return res.json({
      access_token: newAccessToken,
      token_type: "Bearer",
      expires_in: 3600,
      refresh_token: newRefreshToken,
      scope: stored.scopes.join(" ")
    });
  }
  // Handle authorization_code grant...
});
Refresh token rotation is a critical security practice. Every time a refresh token is used, it is invalidated and a new one is issued. If an attacker steals a refresh token and the legitimate client uses it first, the attacker's stolen token becomes invalid.
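Rotation becomes stronger still with reuse detection: if an already-rotated token is ever presented again, the server revokes every token descended from the same original grant. A minimal in-memory sketch, with illustrative class and field names throughout:

```javascript
// Refresh-token store with reuse detection. Tokens issued from the same
// original grant share a familyId; replaying a rotated-out token revokes
// the entire family.
class RefreshTokenStore {
  constructor() {
    this.tokens = new Map(); // token -> { familyId, revoked }
  }
  issue(token, familyId) {
    this.tokens.set(token, { familyId, revoked: false });
  }
  rotate(oldToken, newToken) {
    const entry = this.tokens.get(oldToken);
    if (!entry) return null;
    if (entry.revoked) {
      // Reuse detected: revoke every token in this family.
      for (const t of this.tokens.values()) {
        if (t.familyId === entry.familyId) t.revoked = true;
      }
      return null;
    }
    entry.revoked = true;
    this.issue(newToken, entry.familyId);
    return newToken;
  }
  isActive(token) {
    const entry = this.tokens.get(token);
    return !!entry && !entry.revoked;
  }
}
```

A production store would persist this state and expire old families, but the detection logic is the same.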
Registering with Identity Providers
Instead of building your own authorization server, you can delegate authentication to an established identity provider. This is recommended for production deployments.
Using Auth0
Configure your MCP server to validate tokens issued by Auth0:
import { auth } from "express-oauth2-jwt-bearer";
const validateAuth0Token = auth({
  issuerBaseURL: "https://your-tenant.auth0.com/",
  audience: "https://mcp.example.com/api"
});
app.post("/mcp/tools/call", validateAuth0Token, handleToolCall);
In the Auth0 dashboard, register your MCP server as an API and configure the allowed scopes, token expiration, and client applications.
Using Keycloak
For self-hosted identity management, Keycloak is a strong option:
import Keycloak from "keycloak-connect";
const keycloak = new Keycloak({}, {
  realm: "mcp",
  "auth-server-url": "https://keycloak.example.com/auth",
  "ssl-required": "all",
  resource: "mcp-server",
  "bearer-only": true
});

app.post("/mcp/tools/call",
  keycloak.protect("realm:mcp-tools"),
  handleToolCall
);
Provider Comparison
| Provider | Best For | Self-Hosted | MCP Scopes |
|---|---|---|---|
| Auth0 | SaaS teams, quick setup | No | Custom API scopes |
| Keycloak | Enterprise, full control | Yes | Realm roles and scopes |
| Okta | Enterprise SSO | No | Custom authorization servers |
| Azure AD | Microsoft ecosystem | No | App roles and scopes |
| Google IAM | GCP deployments | No | OAuth scopes |
PKCE Implementation Details
PKCE (Proof Key for Code Exchange, pronounced "pixy") is mandatory in OAuth 2.1. Here is a complete implementation of the server-side verification:
import crypto from "crypto";
// Server-side: verify the PKCE code challenge during token exchange
function verifyPKCE(codeVerifier, storedCodeChallenge, method) {
  if (method !== "S256") {
    // OAuth 2.1 requires S256; reject the plain method
    return false;
  }
  const computed = crypto
    .createHash("sha256")
    .update(codeVerifier)
    .digest("base64url");
  return computed === storedCodeChallenge;
}

// In the token endpoint
app.post("/token", async (req, res) => {
  if (req.body.grant_type === "authorization_code") {
    const authCode = await codeStore.find(req.body.code);
    if (!authCode) {
      return res.status(400).json({ error: "invalid_grant" });
    }

    // Verify PKCE
    if (!verifyPKCE(
      req.body.code_verifier,
      authCode.codeChallenge,
      authCode.codeChallengeMethod
    )) {
      return res.status(400).json({
        error: "invalid_grant",
        error_description: "PKCE verification failed"
      });
    }

    // Issue tokens...
  }
});
The S256 method (SHA-256) is the only code challenge method an OAuth 2.1 server should accept. The plain method, in which the challenge is simply the verifier itself sent in cleartext, offers no interception protection and must be rejected.
Security Checklist for MCP OAuth
Before deploying your OAuth-protected MCP server, verify every item:
| Check | Why It Matters |
|---|---|
| TLS (HTTPS) on all endpoints | Prevents token interception in transit |
| PKCE required for all authorization requests | Blocks authorization code interception |
| Exact redirect URI matching | Prevents open redirect attacks |
| Refresh token rotation enabled | Limits window for stolen token reuse |
| Access token expiry under 1 hour | Reduces impact of token theft |
| No tokens in URL query parameters | Prevents leakage via referrer headers and logs |
| Rate limiting on token endpoint | Blocks brute-force token guessing |
| CORS restricted to known origins | Prevents unauthorized browser-based access |
| Token revocation endpoint available | Enables immediate access termination |
| Audit logging for all auth events | Provides forensic trail for incidents |
Common Implementation Pitfalls
These mistakes appear frequently in MCP OAuth implementations:
Storing tokens in localStorage. Browser-based MCP clients should never store tokens in localStorage, which is accessible to any JavaScript on the page. Use secure, HTTP-only cookies or in-memory storage instead.
Skipping state parameter validation. The state parameter in the authorization request prevents CSRF attacks. Always generate a cryptographically random state, store it before redirecting, and verify it matches when the callback arrives.
Not validating the token audience. An access token issued for one MCP server should not be accepted by another. Always check the aud (audience) claim matches your server's identifier.
Using symmetric signing keys. For multi-server deployments, use asymmetric keys (RS256 or ES256) so that servers can validate tokens without having access to the signing key. Only the authorization server holds the private key.
Ignoring token scope on the server side. Even if the authorization server issued a token with broad scopes, your MCP server should check that the requested operation is within the token's scope before executing.
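Of these pitfalls, state handling is mechanical enough to sketch in code. The pendingStates map here stands in for whatever session storage your client actually uses:

```javascript
import crypto from "crypto";

// Generating and checking the CSRF "state" value on the client side.
const pendingStates = new Map(); // state -> createdAt timestamp

// Before redirecting to the authorization endpoint.
function createState() {
  const state = crypto.randomBytes(16).toString("base64url");
  pendingStates.set(state, Date.now());
  return state;
}

// On the callback: the state must match one we issued, and is single-use.
function consumeState(state) {
  if (!pendingStates.has(state)) return false;
  pendingStates.delete(state);
  return true;
}
```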
What to Read Next
- MCP Security Model: Authentication, Permissions and Best Practices -- the parent guide covering the full MCP security architecture
- Securing Filesystem MCP Servers -- hardening file access servers against path traversal and data leakage
- Deploying Remote MCP Servers -- infrastructure and deployment patterns for production MCP servers
- MCP Architecture Explained -- understanding the protocol layers that OAuth integrates with
- Browse MCP Servers -- explore the directory for servers that implement OAuth authentication