How It Works

MCPGate sits in the request path between your AI client and every third-party service it wants to use. This page explains what happens at each stage: how credentials are protected, how tool calls are evaluated, and how the MCP transport layer works.

Architecture Overview#

Every tool call follows this path:

```text
AI Client         (Claude Desktop, Cursor, ChatGPT…)
      ↓  MCP Streamable HTTP
MCPGate           (Auth → Guardrail evaluation → Tool dispatch)
      ↓  Native API calls (OAuth, REST, gRPC)
Third-party APIs  (Gmail, Slack, GitHub, Notion…)
```

MCPGate handles authentication with both sides: it validates the API key from your AI client, retrieves your encrypted service credentials from the vault, makes the API call on your behalf, and returns the result. The LLM never sees a credential at any point in this flow.

Credential Isolation#

When you connect a service through OAuth or paste an API key, MCPGate encrypts that credential using AES-256-GCM with a per-user data encryption key (DEK). The DEK itself is encrypted by a key encryption key (KEK) managed in a hardware security module. At rest, nothing is stored in plaintext.
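The DEK/KEK scheme described above is standard envelope encryption. A minimal sketch using the `cryptography` package — with the caveat that every name here is illustrative, and the KEK lives in software only for this demo, whereas MCPGate holds it in an HSM:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = AESGCM.generate_key(bit_length=256)   # in production: HSM-managed, never exported
dek = AESGCM.generate_key(bit_length=256)   # per-user data encryption key

def seal(key: bytes, plaintext: bytes) -> bytes:
    """AES-256-GCM encrypt; prepend the 96-bit nonce so it travels with the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(key: bytes, blob: bytes) -> bytes:
    """Split off the nonce and decrypt; GCM authentication fails on any tampering."""
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

# At rest: the credential is sealed under the DEK, and the DEK is sealed under the KEK.
stored_credential = seal(dek, b"xoxb-example-slack-token")
stored_dek = seal(kek, dek)

# On a tool call: unwrap the DEK, then decrypt the credential in memory.
live_dek = unseal(kek, stored_dek)
credential = unseal(live_dek, stored_credential)
```

Because the stored credential and the stored DEK are both ciphertext, a database leak alone exposes nothing; an attacker would also need the HSM-held KEK.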

When a tool call arrives, MCPGate decrypts the relevant credential in memory, uses it to make the API call, then discards it. The credential:

  • Never appears in tool call arguments
  • Never appears in tool call results
  • Never enters the LLM's context window
  • Never appears in activity logs
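That decrypt-use-discard lifecycle can be sketched as a scoped helper. All class and function names below are illustrative stand-ins, not MCPGate internals:

```python
from contextlib import contextmanager

class Vault:
    """Stand-in for the credential vault; decrypt() yields plaintext on demand."""
    def __init__(self, secrets: dict):
        self._secrets = secrets
    def decrypt(self, service: str) -> str:
        return self._secrets[service]

@contextmanager
def ephemeral_credential(vault: Vault, service: str):
    secret = vault.decrypt(service)   # plaintext exists only inside this scope
    try:
        yield secret
    finally:
        secret = None                 # dropped before any result leaves the gateway

def call_native_api(token: str, tool: str, args: dict) -> dict:
    # Stand-in for the upstream REST/gRPC call: consumes the token,
    # returns API output that never echoes it.
    assert token is not None
    return {"tool": tool, "status": "ok"}

def dispatch_tool_call(vault: Vault, service: str, tool: str, args: dict) -> dict:
    with ephemeral_credential(vault, service) as token:
        api_result = call_native_api(token, tool, args)
    # Only the API output crosses back over MCP into the LLM's context.
    return api_result
```

The point of the structure is that the credential is syntactically unable to reach the tool result: it is only in scope inside the `with` block, and the value returned to the client is built from API output alone.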

Why this matters

Storing API keys in claude_desktop_config.json means every tool call result — and every system prompt — could potentially expose that key to the LLM. Prompt injection attacks can trick the model into leaking values from its context. MCPGate removes the credential from the equation entirely: even a successful prompt injection finds nothing to steal.

Evaluation Chain#

When a tool call arrives at MCPGate, it passes through four evaluation layers before execution. The first deny at any layer short-circuits the chain — the call is rejected immediately and logged.

1. **Admin Global Rules**: Site-wide tool enable/disable configured by the MCPGate administrator. Blocks tools that are disabled for all users — for example, during a connector outage or a security incident.

2. **User Tool Preferences**: Per-user connector tool toggles set on the Connectors page. You can disable individual tools you never want any of your apps to call, regardless of their guardrail policy.

3. **App Tool Access**: Which tools this specific MCP App is allowed to call. An app configured for read-only access will be denied any write-capable tools here, even if your user preferences allow them.

4. **App Guardrail Rules**: Deterministic template-based rules evaluated against the tool arguments: keyword blockers, PII detection, recipient allowlists, rate limits, and more. Up to 32 rules per app.

Tip

Rules are deterministic — no AI is involved in the decision path. The same tool call with the same arguments will always produce the same allow or deny decision. This makes guardrails auditable, testable, and impossible to jailbreak through clever prompting.
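The first-deny-wins chain is a short loop over pure functions. Here is a sketch in Python — the layer logic and every rule value are invented for illustration and are not MCPGate defaults:

```python
def evaluate(call: dict, layers: list) -> tuple:
    """Run each layer in order; the first deny short-circuits the chain."""
    for layer in layers:
        verdict = layer(call)
        if verdict != "allow":
            return ("deny", verdict)   # rejected and logged immediately
    return ("allow", None)

# Example layers, mirroring the four stages above (contents are hypothetical).
def admin_global_rules(call):
    return "allow" if call["tool"] != "delete_repo" else "disabled site-wide"

def user_tool_preferences(call):
    return "allow" if call["tool"] != "send_email" else "disabled by user"

def app_tool_access(call):
    granted = {"read_file", "list_issues"}
    return "allow" if call["tool"] in granted else "not granted to this app"

def app_guardrail_rules(call):
    blocked_keywords = {"password", "ssn"}
    text = str(call.get("args", "")).lower()
    return "allow" if not any(k in text for k in blocked_keywords) else "keyword blocked"

layers = [admin_global_rules, user_tool_preferences,
          app_tool_access, app_guardrail_rules]
```

Because every layer is a pure function of the call, replaying the same call always yields the same verdict — which is exactly what makes the chain auditable and testable.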

MCP Transport#

MCPGate implements the Model Context Protocol using Streamable HTTP transport — the current MCP standard for remote servers. This means no local processes, no npm packages, and no SSH tunnels. Your AI client connects over HTTPS to a single stable URL.

Every MCP App gets its own endpoint:

```text
https://api.mcpgate.sh/mcp/YOUR_API_KEY
```

This URL is compatible with Claude Desktop, Claude Code, Cursor, and any other client that supports MCP Streamable HTTP. Older clients that only support stdio transport can use a thin local adapter — see the Other Clients guide.
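Under Streamable HTTP, a client simply POSTs JSON-RPC messages to that URL. A sketch of the opening `initialize` message, following the MCP specification — the `protocolVersion` shown is the spec revision that introduced Streamable HTTP, and the client name is a placeholder, so check your client's actual values:

```python
import json

# The per-app endpoint shown above.
ENDPOINT = "https://api.mcpgate.sh/mcp/YOUR_API_KEY"

def initialize_request(client_name: str, client_version: str) -> dict:
    """Build the JSON-RPC `initialize` message that opens an MCP session."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",   # revision that defined Streamable HTTP
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": client_version},
        },
    }

# Streamable HTTP is plain HTTPS: each message is POSTed to the single endpoint,
# and the client accepts either a JSON body or an SSE stream in response.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}
body = json.dumps(initialize_request("example-client", "0.1.0"))
```

Nothing here requires a local process — any HTTP client that can POST JSON and read the response can complete the handshake, which is why a single stable URL is all the configuration a supporting client needs.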