Why MCP's "SHOULD" Language Is a Security Failure

Written by the Rafter Team

The Model Context Protocol specification contains a single word that undermines the entire security posture of the ecosystem: "SHOULD."
When Anthropic writes that "implementors SHOULD build security controls," they're using RFC 2119 terminology that means "recommended but optional." For a protocol designed to give AI agents access to sensitive systems, optional security isn't just inadequate—it's a fundamental design failure that guarantees fragmentation, inconsistent protection, and exploitable gaps.
The Language of Protocol Security
RFC 2119 defines precise meanings for requirement levels in protocol specifications:
- MUST / REQUIRED: Absolute requirement for conformance
- SHOULD / RECOMMENDED: There may exist valid reasons in particular circumstances to ignore it
- MAY / OPTIONAL: Truly optional, implementor discretion
These aren't stylistic choices. They define the contract between protocol designers and implementors. When a security control is marked SHOULD, the spec is explicitly saying: "This matters, but you can skip it if you want."
The MCP specification uses SHOULD for critical security controls:
- Human-in-the-loop approval for sensitive operations
- Authentication mechanisms
- Rate limiting and abuse prevention
- Audit logging
The result is predictable: most implementors skip them. Security becomes an exercise left to the reader.
What Happens When Security is Optional
Optional security guarantees exactly one outcome: inconsistent implementation. When given the choice between building security controls and shipping faster, most teams ship faster. This isn't laziness—it's rational prioritization when the protocol itself signals that security is negotiable.
Consider what happens across the MCP ecosystem:
Scenario 1: The MVP Implementation
A developer building an MCP server for internal tools needs to ship quickly. The spec says authentication SHOULD be implemented, but that sounds like Phase 2 work. They ship without it. The server runs on localhost, so it feels safe. Six months later, someone adds remote access. The authentication layer never gets built.
Scenario 2: The "Smart" Shortcut
Another team reads the human-in-the-loop recommendations and decides their AI agent is "smart enough" to make safe decisions. They implement automatic approval for a subset of operations that seem low-risk. They're technically compliant—SHOULD isn't MUST. But their risk assessment was wrong. The agent escalates privileges through a chain of "safe" operations.
Scenario 3: The Fragmentation Problem
Ten different MCP server implementations exist for filesystem access. Each interprets the security SHOULDs differently:
- Server A: No authentication, assumes trusted network
- Server B: API key auth, no human approval
- Server C: Human approval for writes, not reads
- Server D: Full human approval, but allows batch operations
- Server E: No security controls, explicitly documented as "development only" (but used in production)
A client connecting to these servers can't know what security model applies. The protocol provides no way to negotiate or verify security controls. Each server is "compliant" with the spec.
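The consequence can be captured in a toy model. The server names and policy rankings below are illustrative, not part of any MCP API; the point is that without protocol-level negotiation, the only guarantee a client can rely on is whatever the least-protected server enforces.

```python
# Toy model: with no way to negotiate or verify security, a client must
# assume the fleet offers only its weakest server's protections.
# Server names and policy levels are illustrative, not real MCP concepts.
POLICY_RANK = {"none": 0, "api_key": 1, "api_key+approval": 2}

servers = {
    "server_a": "none",              # assumes trusted network
    "server_b": "api_key",           # auth, no human approval
    "server_c": "api_key+approval",  # approval for writes only
}

def effective_security(fleet: dict) -> str:
    """The guarantee a client can actually rely on: the minimum."""
    return min(fleet.values(), key=POLICY_RANK.__getitem__)

print(effective_security(servers))  # -> none
```

Adding a stronger server to the fleet never raises this floor; only removing or wrapping the weakest one does.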
How Real Protocols Enforce Security
Successful security protocols don't make protection optional. They enforce it at the protocol level, making insecure implementations non-compliant by design.
| Protocol | Security Model | Enforcement |
|---|---|---|
| TLS | Encryption MUST be negotiated before data transfer | Handshake fails without crypto agreement. Cleartext communication is not TLS. |
| OAuth 2.0 | Authorization MUST be obtained before resource access | Token validation is required. No token = 401 response. Not optional. |
| SSH | Authentication MUST succeed before shell access | Connection closes without valid credentials. No backdoor to "try without auth." |
| MCP | Security SHOULD be implemented based on context | No enforcement mechanism. Implementor decides. Client has no way to verify. |
The pattern is clear: protocols that take security seriously make it mandatory and verifiable. Protocols that make security optional get insecure implementations.
TLS doesn't say "implementors SHOULD use encryption." It says: here's the handshake, here's the cipher negotiation, here's what happens if it fails. You can't speak TLS without encryption—the protocol prevents it.
OAuth doesn't say "resource servers SHOULD validate tokens." It defines the validation flow and the error responses for invalid tokens. A server that doesn't validate isn't implementing OAuth.
MCP says "implementors SHOULD consider security based on their context" and provides no mechanism to enforce, verify, or even detect whether security controls exist. The MCP security best practices page offers guidance but no enforcement.
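The TLS posture is observable directly in tooling built on it. A small sketch using Python's standard `ssl` module (behavior as documented for Python 3.7 and later): a client context verifies certificates by default, and the API refuses the insecure combination rather than silently downgrading.

```python
import ssl

# TLS stacks enforce the security model in the implementation itself.
# A client context enables hostname checking and certificate verification
# by default...
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)  # True True

# ...and refuses to turn verification off while hostname checking is on,
# instead of quietly accepting an insecure configuration.
try:
    ctx.verify_mode = ssl.CERT_NONE
except ValueError as exc:
    print("refused:", exc)
```

There is no equivalent refusal anywhere in MCP: a connection with no authentication at all is indistinguishable, at the protocol level, from a fully secured one.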
The Human-in-the-Loop Illusion
MCP's most prominent security recommendation is human-in-the-loop approval for sensitive operations. The spec describes this as a key protection against agent misbehavior. But it's phrased as a SHOULD, not a MUST, and defined loosely enough that nearly any implementation can claim compliance.
What counts as human-in-the-loop?
- Explicit per-operation approval prompts?
- Batch approval for a series of operations?
- Approval of high-level intent, with agent autonomy for implementation details?
- Notification with 30-second timeout before auto-approval?
The spec doesn't define requirements. The result: every implementor builds something different, calls it "human-in-the-loop," and moves on.
Worse, MCP treats human-in-the-loop approval as an interface concern rather than a protocol feature. MCP servers have no standard way to request approval, no standard format for describing operations to users, and no standard response vocabulary. Some servers implement approval as a tool call. Others implement it as a custom extension. Many skip it entirely.
When security controls live entirely in implementation-specific code with no protocol support, they're decorative. They exist where convenient and vanish where costly.
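What a protocol-level approval format could look like is easy to sketch. Nothing like the following exists in MCP today; the message shapes, field names, and risk levels are hypothetical, but a standard of roughly this form is what would let clients, servers, and audit tools agree on what "human-in-the-loop" means.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical wire format for protocol-level approval -- not part of MCP.
# A defined request/response shape is what makes approval verifiable.
@dataclass
class ApprovalRequest:
    operation: str                        # e.g. "filesystem/write"
    description: str                      # human-readable effect
    risk: Literal["low", "medium", "high"]
    batch_id: Optional[str] = None        # links operations approved together

@dataclass
class ApprovalResponse:
    operation: str
    decision: Literal["approved", "denied", "expired"]
    approver: str                         # audit trail: who decided

req = ApprovalRequest("filesystem/write", "overwrite /etc/hosts", "high")
resp = ApprovalResponse(req.operation, "denied", "alice@example.com")
print(resp.decision)  # -> denied
```

With a shape like this, "30-second timeout before auto-approval" would have to surface as an explicit `expired` decision rather than silent consent.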
The Fragmentation Attack Surface
Optional security creates a fragmentation problem that becomes an attack surface. When every MCP server implements authentication differently (or not at all), attackers can:
1. Shop for the weakest link: If an organization runs five different MCP servers, the attacker needs to compromise only the one without authentication. They all provide useful capabilities. Only one needs to be vulnerable.
2. Exploit inconsistent mental models: Developers using authenticated servers may assume all MCP servers require authentication. They configure firewall rules and network policies based on that assumption. Then someone deploys an unauthenticated server, and the protections don't apply.
3. Leverage partial implementations: A server might implement authentication but not authorization, or authorization but not audit logging. Attackers chain partial protections to achieve full compromise. Each server is "secure" in isolation. Together they're exploitable.
4. Abuse batch operations: Servers with human-in-the-loop approval often provide batch operation modes for convenience. An attacker escalates privileges by making requests that look like legitimate batch workflows but contain embedded malicious operations.
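The weakest-link search in point 1 is also the defender's cheapest audit. A minimal sketch, assuming a fleet of endpoints to check: the hostnames, ports, and probe payload below are placeholders, not a real MCP handshake, but the shape of the check — "does anything answer without rejecting an unauthenticated request?" — is the same one an attacker automates.

```python
import socket

# Defender-side sketch: flag servers that answer an unauthenticated request.
# Hostnames, ports, and the probe payload are placeholders for illustration.
FLEET = [("mcp-files.invalid", 8080), ("mcp-db.invalid", 8081)]

def accepts_unauthenticated(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            # A made-up unauthenticated probe, not real MCP traffic.
            s.sendall(b'{"jsonrpc":"2.0","id":1,"method":"tools/list"}\n')
            reply = s.recv(4096)
            return b"error" not in reply  # no rejection => weakest link
    except OSError:
        return False  # unreachable counts as "not exposed"

weak = [host for host, port in FLEET if accepts_unauthenticated(host, port)]
```

Running a check like this against your own fleet before an attacker does is the practical consequence of a protocol that cannot answer the question for you.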
This isn't hypothetical. It's the well-documented pattern of how fragmented security models fail in practice. OAuth 1.0 had similar problems—optional security mechanisms led to incompatible implementations and widespread vulnerabilities. OAuth 2.0 learned from that failure and made security mandatory.
MCP is repeating OAuth 1.0's mistakes.
The "Context-Dependent" Defense
Defenders of MCP's SHOULD language often argue that security requirements are context-dependent. A server used for development has different needs than one used in production. Local-only servers don't need the same protections as internet-facing ones.
This argument confuses deployment context with protocol design. Yes, security requirements vary by deployment. But the protocol should enforce baseline security that applies everywhere and provide standard mechanisms for deployments that need more.
TLS works this way. Every TLS connection requires encryption. If you're in a trusted environment and don't need encryption, you don't use TLS—you use plaintext HTTP. The protocol doesn't provide a "TLS lite" mode for trusted networks. It enforces its security model universally.
MCP could work this way. The protocol could require authentication and authorization for all connections. Deployments that don't need full security could use a different protocol. But MCP tries to be everything to everyone, making all security optional and providing no standard way to communicate security requirements between clients and servers.
The result: clients can't trust servers, servers can't trust clients, and everyone builds custom solutions that don't interoperate.
Treating MCP as Untrusted
Given MCP's optional security model, the only safe approach is to treat all MCP servers as untrusted and add external enforcement layers.
Network segmentation: Run MCP servers in isolated network segments with strict firewall rules. Don't rely on server-level authentication—assume it doesn't exist.
Capability-based sandboxing: Wrap MCP servers in sandboxes that enforce capability restrictions at the OS level. If a server claims to restrict filesystem access, enforce it with chroot or mandatory access controls.
Audit everything: Log all MCP traffic at the network level, not just at the application level. Don't trust servers to implement audit logging correctly.
Defense in depth: Layer multiple security controls so that compromise of any single MCP server doesn't grant broad access. Assume attackers will find the unprotected server.
Vendor assessment: Before deploying any MCP server, audit its actual security implementation. Don't trust claims of compliance with SHOULD requirements. Test authentication bypasses, authorization boundaries, and approval flows.
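The "audit everything" control above can be sketched concretely. This is a minimal TCP proxy that records every byte exchanged with an MCP server at the network level, so the audit trail never depends on the server's own logging; addresses and ports are placeholders, and a production version would need TLS handling, rotation, and tamper-evident storage.

```python
import socket
import threading

def _pump(src: socket.socket, dst: socket.socket, tag: str, log: list) -> None:
    # Copy bytes between sockets, recording each chunk in a log the
    # proxied server cannot see or modify.
    while chunk := src.recv(4096):
        log.append((tag, chunk))
        dst.sendall(chunk)

def audit_proxy(listen_port: int, upstream: tuple, log: list) -> None:
    # Accept local connections and relay them to the real MCP server,
    # logging traffic in both directions at the network layer.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen()
    while True:  # one pump-thread pair per proxied connection
        client, _ = srv.accept()
        server = socket.create_connection(upstream)
        for a, b, tag in ((client, server, "client->server"),
                          (server, client, "server->client")):
            threading.Thread(target=_pump, args=(a, b, tag, log),
                             daemon=True).start()
```

Pointing clients at the proxy instead of the server is the whole deployment change; the server needs no cooperation, which is exactly the point.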
This is expensive, operationally complex, and largely negates the value proposition of a standard protocol. If every deployment needs custom security wrappers, why use a standard protocol at all?
Protocol-Level Enforcement: The Alternative
MCP could enforce security at the protocol level, making insecure implementations non-compliant by design.
Required authentication handshake: Every MCP connection starts with authentication negotiation. No authentication method agreed? Connection closes. No way to skip it.
Capability declarations: Servers declare supported capabilities during handshake. Clients declare required security controls. If server doesn't support required controls, connection closes. No ambiguity.
Standardized approval flows: Human-in-the-loop approval is a first-class protocol feature with defined request/response formats. Servers that don't support approval can't claim compliance.
Mandatory audit events: Protocol defines standard audit event formats. Servers must emit them. Clients can subscribe to audit streams. Non-compliance is detectable.
Security policy negotiation: Clients and servers negotiate security policies at connection time. Policy mismatches cause connection failure. No silent downgrades.
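The first proposal — a required authentication handshake — is small enough to sketch. The method names below are illustrative and no such negotiation exists in MCP today; the point is structural: the state machine has no path to an established connection that skips authentication.

```python
from dataclasses import dataclass

# Illustrative only: a handshake where authentication is not skippable.
# No such negotiation exists in MCP today.
SUPPORTED_AUTH = {"oauth2", "mtls"}

@dataclass
class Connection:
    authenticated: bool = False

def handshake(client_methods: set) -> Connection:
    agreed = SUPPORTED_AUTH & client_methods
    if not agreed:
        # No common method: the connection never comes up. There is no
        # "proceed without auth" branch for an implementor to take.
        raise ConnectionRefusedError("no mutually supported auth method")
    return Connection(authenticated=True)

handshake({"oauth2"})    # succeeds
# handshake({"none"})    # would raise ConnectionRefusedError
```

A server that removed the refusal branch would simply not interoperate — the same property that makes a cleartext endpoint "not TLS."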
This is how Rafter approaches the problem. We treat MCP servers as untrusted components and enforce security at the orchestration layer:
- Authentication required: Every server connection requires authentication. No exceptions. Servers without authentication support can't be deployed.
- Operation approval: Sensitive operations route through approval workflows that can't be bypassed at the server level. Servers don't decide what's sensitive—the orchestrator does.
- Capability enforcement: Servers declare capabilities. Orchestrator enforces least-privilege access based on capabilities and request context.
- Audit immutability: All MCP operations generate audit events at the orchestrator level. Servers can't suppress or modify audit logs.
This doesn't require changes to MCP servers. It treats them as untrusted components and enforces security externally. But it highlights what MCP should have been: a protocol with mandatory, verifiable security built in.
Conclusion
"SHOULD" is not a security model. It's an abdication of responsibility.
When protocol designers make security optional, they guarantee that implementations will be insecure. Fragmentation becomes inevitable. Attackers exploit the weakest implementation. Organizations that need security build custom solutions that don't interoperate.
MCP had the opportunity to learn from decades of protocol security failures. OAuth 1.0 tried optional security and failed. TLS makes encryption mandatory. SSH makes authentication mandatory. Every successful security protocol enforces protection at the protocol level.
MCP chose SHOULD instead of MUST, and the ecosystem is paying the price in fragmented implementations, inconsistent security models, and exploitable gaps.
The only viable response is defense in depth: treat MCP as untrusted, enforce security externally, and hope the next version of the protocol learns from these mistakes.
Security can't be optional. Protocols that make it optional aren't secure protocols—they're insecure protocols with documentation about how they could have been secure.
MCP is the latter.