No Authentication Model: MCP's Original Sin

Written by the Rafter Team

Model Context Protocol launched with a promise: let AI agents securely connect to external tools and data sources. Anthropic published an authorization spec. They recommended OAuth. They documented confused deputy attacks. They warned about token theft.
But here's the line that undermines everything: "Authorization is OPTIONAL for MCP implementations." (See the MCP Authorization specification.)
OAuth is recommended. Stdio uses environment variables. Both are optional. That's not a security model. It's a delegation of responsibility with no enforcement mechanism.
What this looks like in practice: The optional-auth model directly enabled CVE-2025-61260 — a CVSS 9.8 exploit where malicious MCP server definitions in Codex CLI's project config auto-executed without any authentication or approval step. When auth is optional, attackers skip it.
The Authentication Gap
Authentication determines who is making a request. Authorization determines what they can do. You can't have the second without the first.
The MCP authorization spec spends thousands of words explaining OAuth flows, PKCE challenges, and token refresh patterns. It mandates resource indicators. It requires HTTPS. But authentication itself? Optional.
The spec says implementations using HTTP transport "SHOULD conform" to the OAuth guidelines. Implementations using stdio "SHOULD NOT follow this specification" but instead "retrieve credentials from the environment." Alternative transports "MUST follow established security best practices."
Should. Should not. Must follow best practices. These aren't requirements—they're suggestions with escape clauses.
The result: no standard for who can call what. No mechanism to verify caller identity. No uniform way to scope access at the per-tool level. The protocol defines how to pass tokens but not how to validate them, how to structure them, or what claims they must contain.
HTTP Transport: OAuth Without Teeth
For HTTP-based MCP servers, the spec recommends OAuth 2.1. But recommendations without enforcement create the illusion of security rather than security itself.
The OAuth flow exists. Authorization servers exist. The spec even provides sequence diagrams showing proper discovery, token exchange, and validation. But none of it addresses the fundamental question: what does this token actually authorize?
Compare MCP's approach to the OAuth 2.1 draft it claims to implement. OAuth 2.1 mandates PKCE for all clients, requires exact redirect URI matching, and eliminates the implicit grant entirely. MCP adopts the mechanism but makes the entire layer optional. RFC 6749 didn't make auth recommendations—it specified exact conformance requirements.
MCP takes the opposite approach. The authorization spec says HTTP implementations "SHOULD conform" to OAuth but never defines what conformance means. The spec recommends Authorization Code flow for end-user scenarios and Client Credentials for machine-to-machine—but "SHOULD support" leaves the door open to skip both. Should you use JWT tokens or opaque tokens? No guidance. Should you validate token signatures or call an introspection endpoint? Up to you.
OAuth scopes exist in the spec—but they're freeform strings with no standard structure. A server might use files:read. Another might use read-files. A third might use filesystem.read. There's no schema, no standard claims format, no way for a client to know what scopes mean without reading server-specific documentation.
This matters because scope interpretation determines what actions a token authorizes. If I have a token with scope database:read, can I read all tables or just specific ones? Can I read rows or just schema? Can I execute stored procedures that return data? Without standardization, every server answers differently.
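To make the fragmentation concrete, here is a minimal sketch in which three hypothetical servers spell the same "read files" capability differently. The server names and scope strings are invented for illustration; nothing here comes from the spec, which is exactly the problem:

```python
# Hypothetical scope strings three different MCP servers might use for
# the same capability. None of these names is defined by the spec.
SERVER_SCOPES = {
    "server-a": "files:read",
    "server-b": "read-files",
    "server-c": "filesystem.read",
}

def scopes_match(granted: str, required: str) -> bool:
    # Without a standard schema, exact string comparison is all a
    # client can do; semantically identical scopes don't match.
    return granted == required

# A token granted "files:read" by server-a authorizes nothing on the
# other two servers, even though all three scopes mean the same thing.
granted = SERVER_SCOPES["server-a"]
results = {name: scopes_match(granted, scope) for name, scope in SERVER_SCOPES.items()}
print(results)
```

The only fix available today is a hand-maintained per-server mapping table, which is precisely the server-specific documentation burden the spec offloads onto clients.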
The spec does mandate some OAuth features. Clients "MUST use the Authorization request header" for bearer tokens. Servers "MUST validate access tokens" and "MUST reject tokens that do not include them in the audience claim." But these requirements sit on top of an optional foundation. You must validate tokens—but only if you've chosen to implement authentication at all.
The spec's confused deputy section illustrates the problem perfectly. Consider an MCP proxy server that connects clients to third-party APIs:
- User authorizes legitimate client to access third-party API through MCP proxy
- Third-party authorization server sets consent cookie for the proxy's static client ID
- Attacker sends user malicious link with crafted authorization request
- Cookie still present, consent screen skipped
- Authorization code redirected to attacker's server
- Attacker exchanges code for access tokens
The mitigation? "MCP proxy servers MUST implement per-client consent and proper security controls." Not protocol-level enforcement—documentation telling you to build it yourself.
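What "per-client consent" means in practice is that the consent record must be keyed on the downstream client identity, not just the proxy's static client ID. A minimal sketch, with all identifiers invented for illustration:

```python
# Consent store keyed by (user, downstream client ID), not by the
# proxy's static client ID alone. Illustrative only.
consent_store: set[tuple[str, str]] = set()

def grant_consent(user: str, client_id: str) -> None:
    consent_store.add((user, client_id))

def needs_consent_screen(user: str, client_id: str) -> bool:
    # A cookie proving the user once consented to the *proxy* is not
    # enough; consent must be checked per downstream client.
    return (user, client_id) not in consent_store

# User consents to the legitimate client...
grant_consent("alice", "legit-client")
# ...but an attacker-controlled client ID still triggers the consent screen.
print(needs_consent_screen("alice", "legit-client"))
print(needs_consent_screen("alice", "attacker-client"))
```

The attack in the sequence above works because the consent check collapses to the first key only; adding the second key is the entire mitigation, and the protocol leaves building it to each proxy author.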
Token audience validation follows the same pattern. The spec says servers "MUST validate that access tokens were specifically issued for them" and clients "MUST implement the resource parameter" per RFC 8707. But there's no standard format for these claims, no validation framework, no tooling to verify compliance.
Consider what this means in practice. An MCP server receives a bearer token. According to the spec, it must validate that token. But how?
If the token is a JWT, the server could verify the signature and check the aud claim matches its own identifier. But what should that identifier be? The spec says to use "the canonical URI of the MCP server"—but is that https://mcp.example.com, https://mcp.example.com/, or https://mcp.example.com/mcp? The spec acknowledges this ambiguity and suggests using the form without trailing slash "for better interoperability" but doesn't require it.
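A sketch of what JWT audience validation plus canonical-URI normalization might look like, using a toy HS256 token built from the standard library. This is a demonstration, not a production verifier; real servers would use the authorization server's published keys, and the normalization rule follows the spec's non-binding "no trailing slash" suggestion:

```python
import base64, hashlib, hmac, json

SECRET = b"shared-secret"  # illustrative; real validation uses the AS's keys

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(claims: dict) -> str:
    # Minimal HS256 JWT for demonstration purposes only.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def canonical(uri: str) -> str:
    # The spec suggests, but does not require, the form without a
    # trailing slash "for better interoperability".
    return uri.rstrip("/").lower()

def validate_audience(token: str, server_uri: str) -> bool:
    header, payload, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    aud = claims.get("aud", "")
    auds = aud if isinstance(aud, list) else [aud]
    return canonical(server_uri) in [canonical(a) for a in auds]

# Token issued with a trailing slash in the audience still validates
# against the slash-free form only because we chose to normalize.
token = make_jwt({"aud": "https://mcp.example.com/", "sub": "alice"})
print(validate_audience(token, "https://mcp.example.com"))
print(validate_audience(token, "https://other.example.com"))
```

Note that the normalization step is a local policy decision: two implementations that pick different canonical forms will reject each other's tokens, which is the interoperability gap the spec acknowledges but doesn't close.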
If the token is opaque, the server needs to call an introspection endpoint. Which endpoint? The spec requires servers to implement OAuth 2.0 Protected Resource Metadata (RFC 9728), which includes an authorization_servers field—though it delegates authorization server selection to RFC 9728 without MCP-specific guidance on introspection failures.
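For opaque tokens, the server is left to interpret an RFC 7662 introspection response on its own. A sketch of the checks a server might apply to a response it has already fetched (field names follow RFC 7662, under which only "active" is required; the fail-closed policy is our assumption, since the MCP spec doesn't state one):

```python
import time

def token_acceptable(introspection: dict, server_uri: str) -> bool:
    # RFC 7662 makes "active" the only REQUIRED response field, so
    # every other check is a policy decision left to the implementer.
    if not introspection.get("active", False):
        return False
    exp = introspection.get("exp")
    if exp is not None and exp <= time.time():
        return False
    aud = introspection.get("aud", "")
    auds = aud if isinstance(aud, list) else [aud]
    # Fail closed: no audience claim means no proof the token was
    # issued for this server.
    return server_uri in auds

live = {"active": True, "aud": "https://mcp.example.com", "exp": time.time() + 300}
stolen = {"active": True, "aud": "https://other-api.example.com", "exp": time.time() + 300}
print(token_acceptable(live, "https://mcp.example.com"))
print(token_acceptable(stolen, "https://mcp.example.com"))
```

The second token is live and valid, just issued for a different service; rejecting it is exactly the audience validation the spec mandates without specifying how.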
The authorization spec describes what secure OAuth looks like. It doesn't make OAuth mandatory. It doesn't standardize claims. It doesn't provide enforcement mechanisms. It tells you to implement OAuth correctly without giving you the tools to verify you have.
Stdio Transport: The Environment Variable Problem
For local connections, MCP punts entirely. The spec says stdio implementations "SHOULD NOT" use the OAuth flow but should instead "retrieve credentials from the environment."
What does that mean in practice?
No standard variable names. No scoping model. No audit trail. Just: pull secrets from somewhere in your environment and hope they're right.
Consider the implications:
- Every MCP server defines its own environment variable conventions
- Tools running in the same process share the same environment
- Credential rotation means updating scattered configuration files
- Audit logs can't distinguish between different callers using the same credentials
- No way to scope access per tool or per operation
This isn't hypothetical. Look at existing MCP servers in the wild. The Anthropic-maintained Git server expects repository paths in configuration but provides no authentication model. Community servers use variable names like API_KEY, BEARER_TOKEN, AUTH_TOKEN, ACCESS_TOKEN—whatever the author chose. Some read from environment variables, some from config files, some from command-line arguments.
A developer connecting multiple MCP servers needs to manage this chaos. GitHub server wants GITHUB_TOKEN. Slack server wants SLACK_BOT_TOKEN. Database server wants DATABASE_PASSWORD. Each credential goes in the environment, visible to every other process, with no scoping or isolation.
Want to rotate credentials? Update your shell profile, your IDE configuration, your CI/CD secrets, and any other place you've copied these variables. Miss one and the server breaks. Update the wrong one and a different server gets credentials meant for something else.
Want to audit who accessed what? You can't. The server knows a call came from a client, but not which client. If three developers share the same environment variables (because they're documented in the team README), the audit log shows "github read repository" but not who triggered it or why.
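The lack of scoping is easy to demonstrate: credentials set for one MCP server are visible to every tool running in the same environment, and nothing ties a secret to the tool that is supposed to own it. Variable names here are illustrative:

```python
import os

# Credentials for three unrelated MCP servers, all in one flat namespace.
os.environ["GITHUB_TOKEN"] = "ghp_example"
os.environ["SLACK_BOT_TOKEN"] = "xoxb-example"
os.environ["DATABASE_PASSWORD"] = "hunter2"

SECRETS = {"GITHUB_TOKEN", "SLACK_BOT_TOKEN", "DATABASE_PASSWORD"}

def credentials_visible_to(tool_name: str) -> list[str]:
    # Any tool in this process sees every credential; the environment
    # has no notion of which tool "owns" which secret, so the
    # tool_name argument changes nothing.
    return sorted(k for k in os.environ if k in SECRETS)

print(credentials_visible_to("github-server"))
print(credentials_visible_to("slack-server"))
```

Both calls return all three secrets. Subprocess inheritance makes this worse: any child process spawned by any tool starts with the full credential set.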
The stdio model works for prototypes. It fails for production. Optional authentication via environment variables is "security" in scare quotes—technically present but practically worthless.
The stdio transport is designed for simplicity: start a process, communicate over stdin/stdout, close the connection. But authentication via environment variables means any process with access to those variables can impersonate the legitimate caller.
The recent git server CVEs demonstrate what happens without authentication boundaries. CVE-2025-68143 allowed git_init to create repositories at arbitrary filesystem paths. CVE-2025-68144 enabled argument injection in git_diff and git_checkout, allowing local file overwrites. CVE-2025-68145 bypassed repository restrictions through missing path validation.
None of these vulnerabilities required bypassing authentication. They exploited the fact that once you're connected via stdio, you're trusted. No per-tool scoping. No per-operation validation. No way to restrict which repositories a caller can access until Anthropic patched specific argument validation bugs.
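The missing control is simple to describe: resolve the requested path and check it against an allowlist of repository roots before any git operation runs. A sketch of that check under an assumed allowlist (this is not Anthropic's actual patch):

```python
import os

ALLOWED_REPOS = ["/home/alice/projects/app"]  # illustrative allowlist

def path_is_allowed(requested: str) -> bool:
    # realpath collapses ".." segments and symlinks, so traversal
    # tricks are resolved before the prefix check runs.
    resolved = os.path.realpath(requested)
    return any(
        resolved == root or resolved.startswith(root + os.sep)
        for root in ALLOWED_REPOS
    )

print(path_is_allowed("/home/alice/projects/app/src"))           # inside the repo
print(path_is_allowed("/home/alice/projects/app/../../../etc"))  # traversal attempt
print(path_is_allowed("/etc/passwd"))                            # outside entirely
```

A per-tool scoping model in the protocol would make this check uniform and auditable; today each server either implements something like it or, as the CVEs show, doesn't.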
Multi-Tenant Isolation Failures
Optional authentication doesn't scale to multi-tenant deployments. When authentication is recommended rather than required, tenant isolation becomes each implementer's problem.
Imagine an MCP server hosting tools for multiple organizations. Without mandatory authentication:
- No protocol-level way to identify which tenant is making a request
- No standard mechanism to prevent tenant A from accessing tenant B's data
- No uniform audit trail showing which tenant performed which action
The spec provides no guidance on multi-tenant scenarios because it can't—isolation requires authentication, and authentication is optional.
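Here is a sketch of the check every multi-tenant server must currently invent for itself. The "tenant_id" claim is an invented convention, not something the MCP spec or standard JWT claims define, which is the point:

```python
# "tenant_id" is an invented claim name; there is no standard claim
# for tenancy that an MCP client or server can rely on.
def can_access(claims: dict, resource_tenant: str) -> bool:
    token_tenant = claims.get("tenant_id")
    # Fail closed: a token without tenant context authorizes nothing
    # in a multi-tenant deployment.
    return token_tenant is not None and token_tenant == resource_tenant

alice_token = {"sub": "alice", "tenant_id": "acme"}
print(can_access(alice_token, "acme"))         # same tenant
print(can_access(alice_token, "globex"))       # cross-tenant, blocked
print(can_access({"sub": "mallory"}, "acme"))  # no tenant claim, blocked
```

Because the claim name, its format, and the fail-closed policy are all local choices, two servers fronting the same authorization server can disagree on what a token authorizes.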
Multi-tenancy isn't an edge case. It's the deployment model for any MCP server offered as a service. A database MCP server serving multiple customers. A CRM integration serving multiple companies. A filesystem server with different users' data.
Without standard authentication, every server implements isolation differently. Some use API keys in request headers. Some use JWT tokens with tenant claims. Some use connection-level authentication where each tenant gets a separate endpoint. Some skip isolation entirely and rely on network segmentation.
This fragmentation has consequences. A security audit of "MCP deployment" means auditing each server's custom authentication logic separately. Compliance requirements like SOC 2 or ISO 27001 require documenting authentication and authorization controls—but there's no standard to document against.
Credential compromise in this model is catastrophic. If an attacker obtains valid credentials for tenant A, and the server doesn't properly validate tenant context, they might access tenant B's data through the same MCP server. The protocol provides no defense because it doesn't require the authentication that makes tenant claims possible.
Token Passthrough: When Optional Auth Cascades
The security best practices doc explicitly forbids "token passthrough"—accepting a token from an MCP client and forwarding it to downstream APIs without validation.
The rationale is sound:
- Security controls get circumvented when tokens bypass the MCP server's validation logic
- Audit trails break when the downstream API sees requests from different identities than the MCP server
- Trust boundaries collapse when tokens work across services without audience validation
The document explains: "If the MCP server makes requests to upstream APIs, it may act as an OAuth client to them. The access token used at the upstream API is a separate token, issued by the upstream authorization server. The MCP server MUST NOT pass through the token it received from the MCP client."
But preventing token passthrough requires validating tokens. And validating tokens requires a standard format for token claims. And standard token formats require mandatory authentication.
The spec says MCP servers "MUST validate access tokens before processing the request" and "MUST reject tokens that do not include them in the audience claim." But if authentication is optional, what does token validation mean? If there's no standard claims format, what does audience validation look like?
Here's the dependency chain:
- Preventing token passthrough requires validating token audience
- Validating audience requires parsing token claims
- Parsing claims requires standard token format
- Standard formats require mandatory authentication to be meaningful
Break any link and the chain collapses. Make authentication optional and the entire security model becomes advisory.
The prohibition on token passthrough reveals the deeper problem: you can't secure something you haven't authenticated. You can document best practices, warn about confused deputy attacks, mandate audience validation—but without required authentication, these protections are suggestions that only work if implementers get everything right.
Defense: What MCP Actually Needs
The path forward requires three things:
Mandatory authentication profiles. Not recommendations. Not "SHOULD" language. Profiles that specify token formats, required claims, validation procedures, and scope structures. Different profiles for different trust models—maybe public endpoints use API keys, internal services use mutual TLS, multi-tenant systems use JWT with tenant claims.
Per-tool authorization scopes. Not freeform strings. A standard schema: server.tool.operation or similar. github.create_issue.write. slack.send_message.execute. database.customer_table.read. Scopes that actually describe what they authorize instead of relying on server-specific documentation.
Standard token claims. Audience claim identifying the MCP server. Subject claim identifying the caller. Scope claims using the standard format. Issued-at and expiration times. Claims that every token validator can parse without custom code for each server.
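As a sketch of what such a scope schema could look like in code (the server.tool.operation format is this article's proposal, not an existing standard):

```python
from typing import NamedTuple

class Scope(NamedTuple):
    server: str
    tool: str
    operation: str

VALID_OPERATIONS = {"read", "write", "execute"}

def parse_scope(raw: str) -> Scope:
    # Proposed server.tool.operation format; rejects anything that
    # doesn't match, so malformed scopes fail loudly at parse time
    # instead of silently granting or denying access.
    parts = raw.split(".")
    if len(parts) != 3 or parts[2] not in VALID_OPERATIONS:
        raise ValueError(f"malformed scope: {raw!r}")
    return Scope(*parts)

def authorizes(granted: list[str], server: str, tool: str, operation: str) -> bool:
    return any(parse_scope(g) == Scope(server, tool, operation) for g in granted)

granted = ["github.create_issue.write", "database.customer_table.read"]
print(authorizes(granted, "github", "create_issue", "write"))
print(authorizes(granted, "database", "customer_table", "write"))
```

With a schema like this, any generic validator can enforce least privilege per tool and per operation without reading server-specific documentation, which is the property freeform scope strings can't provide.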
None of this is novel. OAuth 2.0 didn't make auth optional. JWT didn't make claims freeform. OpenID Connect didn't punt on token validation. They specified formats and required conformance.
MCP needs the same. Not recommendations. Requirements.
Rafter's Focus Area
This authentication gap is a core problem Rafter is working on. MCP servers shouldn't each reinvent authentication from scratch, parse tokens manually, or maintain custom allow/deny lists.
We're developing tooling focused on:
- Standardized token validation so individual MCP servers don't need custom auth code
- Per-tool scope enforcement with least-privilege defaults
- Audit trails that capture caller identity, tool invocations, and access decisions
Authentication should be infrastructure, not an exercise left to each server implementer.
Conclusion
"Authorization is optional" isn't a feature. It's a bet that every MCP server implementer will independently solve authentication correctly. That they'll standardize their own token formats. That they'll implement proper validation logic. That they'll never make mistakes with credential scoping or tenant isolation.
That bet has already failed. The git server CVEs prove it. The confused deputy attacks prove it. Every server rolling its own environment variable conventions proves it.
MCP needs a security model, not security recommendations. Until then, every MCP deployment is one misconfiguration away from compromise—and the protocol itself provides no defense.