Architecture¶
How the redisctl MCP server is structured internally.
Overview¶
redisctl-mcp is a standalone binary built on tower-mcp, a Rust MCP framework. It exposes Redis management operations as MCP tools that AI assistants can discover and invoke.
```
AI Assistant (Claude, Cursor, etc.)
    |
    | MCP protocol (stdio or HTTP/SSE)
    v
redisctl-mcp
    |
    +-- Policy engine (tier checks, allow/deny lists)
    +-- Audit layer (structured logging of tool calls)
    +-- Tool router (340 tools across 4 toolsets)
    |     |
    |     +-- Cloud tools      -> redis-cloud client      -> Cloud REST API
    |     +-- Enterprise tools -> redis-enterprise client -> Enterprise REST API
    |     +-- Database tools   -> redis crate             -> Redis protocol
    |     +-- App tools        -> redisctl-core           -> local config
    |
    +-- Credential resolution (profiles or OAuth)
```
Feature Flags¶
The binary is compiled with feature flags that gate platform-specific dependencies:
| Feature | Default | Gates |
|---|---|---|
| `cloud` | yes | redis-cloud crate + cloud toolset |
| `enterprise` | yes | redis-enterprise crate + enterprise toolset |
| `database` | yes | redis crate + database toolset |
| `http` | yes | HTTP/SSE transport with optional OAuth |
Profile/app tools and the two system tools are always compiled in (they only depend on redisctl-core).
Compile with --no-default-features and selectively enable what you need:
```shell
# Cloud-only binary
cargo install redisctl-mcp --no-default-features --features cloud

# Database-only binary
cargo install redisctl-mcp --no-default-features --features database
```
Toolset and Sub-Module System¶
Tools are organized into toolsets (cloud, enterprise, database, app) and sub-modules within each toolset. Each sub-module is a Rust module that declares:
- A `TOOL_NAMES` constant listing all tool names in the module
- A `router()` function that registers tools with the MCP router
The `mcp_module!` macro generates both from a single declaration: it expands to a `TOOL_NAMES` array and a `router()` function that builds the tools and merges them into the MCP router.
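The real `mcp_module!` lives in the redisctl-mcp source; as a rough illustration of the pattern only (the names and expansion below are hypothetical, not the actual macro), a declarative macro can derive both items from one list of tool declarations:

```rust
// Hypothetical sketch -- NOT the real mcp_module! from redisctl-mcp.
// It shows the general idea: a single declaration expands to both a
// TOOL_NAMES constant and a router-building function.
macro_rules! mcp_module_sketch {
    ($($name:literal => $handler:expr),* $(,)?) => {
        /// Every tool name declared in this module.
        pub const TOOL_NAMES: &[&str] = &[$($name),*];

        /// Builds the (name, handler) pairs that a real implementation
        /// would register with and merge into the MCP router.
        pub fn router() -> Vec<(&'static str, fn() -> String)> {
            vec![$(($name, $handler as fn() -> String)),*]
        }
    };
}

mcp_module_sketch! {
    "redis_ping" => || "PONG".to_string(),
    "redis_info" => || "server info".to_string(),
}

fn main() {
    assert_eq!(TOOL_NAMES, &["redis_ping", "redis_info"]);
    for (name, handler) in router() {
        println!("{name} -> {}", handler());
    }
}
```

Declaring the name and handler together keeps the tool list and the router from drifting out of sync, which is the point of generating both from one declaration.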
Tool Macros¶
Three platform-specific macros eliminate per-tool boilerplate:
- `database_tool!` -- resolves a Redis connection from a URL or profile, then runs the handler
- `cloud_tool!` -- resolves a Cloud API client from a profile, then runs the handler
- `enterprise_tool!` -- resolves an Enterprise API client from a profile, then runs the handler
Each macro accepts a safety tier (`read_only`, `write`, or `destructive`) that sets the tool's MCP annotations and generates runtime permission guards:
```rust
database_tool!(read_only, ping, "redis_ping",
    "Test connectivity by sending a PING command",
    {} => |conn, _input| {
        let response: String = redis::cmd("PING")
            .query_async(&mut conn).await.tool_context("PING failed")?;
        Ok(CallToolResult::text(format!("Response: {}", response)))
    }
);
```
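The runtime permission guard these macros generate can be pictured as a small tier comparison. This is an illustrative sketch, not redisctl-mcp's actual types; the `Tier` enum and `guard` function are hypothetical names:

```rust
// Illustrative safety-tier guard -- not the actual redisctl-mcp code.
// Variant order gives the ordering: ReadOnly < Write < Destructive.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
enum Tier {
    ReadOnly,
    Write,
    Destructive,
}

/// Rejects the call if the tool's tier exceeds what the active policy
/// allows -- the defense-in-depth check that runs even though
/// out-of-tier tools were never registered with the router.
fn guard(tool_tier: Tier, policy_max: Tier) -> Result<(), String> {
    if tool_tier > policy_max {
        Err(format!("{tool_tier:?} tool blocked by {policy_max:?} policy"))
    } else {
        Ok(())
    }
}

fn main() {
    assert!(guard(Tier::ReadOnly, Tier::Write).is_ok());
    assert!(guard(Tier::Destructive, Tier::Write).is_err());
    println!("tier guard ok");
}
```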
Transport Modes¶
stdio (default)¶
Standard input/output. The AI assistant spawns the redisctl-mcp process and communicates over stdin/stdout. This is the standard MCP transport used by Claude Desktop, Cursor, Claude Code, and others.
HTTP/SSE¶
HTTP transport with Server-Sent Events for streaming. Useful for shared deployments where multiple clients connect to a single MCP server instance.
Optional OAuth authentication protects the HTTP endpoint:
```shell
redisctl-mcp --transport http --oauth \
  --oauth-issuer https://auth.example.com \
  --oauth-audience my-audience \
  --jwks-uri https://auth.example.com/.well-known/jwks.json
```
Credential Resolution¶
Credentials are resolved at tool invocation time, not at server startup. This allows multi-profile configurations where different tools target different clusters.
Profile-based (stdio mode)¶
- Tool call includes an optional `profile` parameter
- If no profile is specified, use the first `--profile` from the CLI args
- If there is no CLI profile, fall back to the default profile from `~/.config/redisctl/config.toml`
- Resolve credentials from the profile (including keyring lookups)
- Build or reuse a cached API client for that profile
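The fallback chain above can be sketched as a single function. This is a minimal illustration of the precedence order, not the real resolver in redisctl-core; the function name and signature are assumptions:

```rust
// Sketch of the profile fallback chain: tool-call parameter,
// then first CLI --profile, then the config-file default.
fn resolve_profile(
    tool_param: Option<&str>,     // `profile` from the tool call
    cli_profiles: &[&str],        // --profile flags from the CLI
    config_default: Option<&str>, // default profile in config.toml
) -> Option<String> {
    tool_param
        .or_else(|| cli_profiles.first().copied())
        .or(config_default)
        .map(String::from)
}

fn main() {
    // The tool-call parameter wins over everything else.
    assert_eq!(
        resolve_profile(Some("prod"), &["dev"], Some("default")),
        Some("prod".to_string())
    );
    // Otherwise the first CLI --profile is used...
    assert_eq!(
        resolve_profile(None, &["dev"], Some("default")),
        Some("dev".to_string())
    );
    // ...and finally the config-file default.
    assert_eq!(
        resolve_profile(None, &[], Some("default")),
        Some("default".to_string())
    );
}
```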
OAuth-based (HTTP mode)¶
In HTTP mode with OAuth enabled, credentials come from environment variables (`REDIS_CLOUD_API_KEY`, `REDIS_ENTERPRISE_URL`, etc.) rather than profiles.
Policy Engine¶
The policy engine evaluates every tool call against the active policy before execution. See Configuration for the full policy reference.
The evaluation flow:
- Registration-time filtering -- tools outside the active tier are hidden from the AI (not registered with the router)
- Runtime guards -- write and destructive tools check the policy tier again inside the handler as defense-in-depth
- Allow/deny lists -- evaluated per the policy evaluation order
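As a rough sketch of how allow/deny lists are commonly evaluated (deny takes precedence; an empty allow list means "allow everything") -- the authoritative evaluation order is in the Configuration reference, and this function is illustrative only:

```rust
// Illustrative allow/deny check -- the real policy engine's order
// is defined in the Configuration reference.
fn is_permitted(tool: &str, allow: &[&str], deny: &[&str]) -> bool {
    // A deny entry always blocks the tool.
    if deny.contains(&tool) {
        return false;
    }
    // With no allow list, everything not denied is permitted;
    // otherwise the tool must appear in the allow list.
    allow.is_empty() || allow.contains(&tool)
}

fn main() {
    assert!(is_permitted("redis_ping", &[], &[]));
    assert!(!is_permitted("redis_flushall", &[], &["redis_flushall"]));
    assert!(is_permitted("redis_ping", &["redis_ping"], &[]));
    assert!(!is_permitted("redis_set", &["redis_ping"], &[]));
}
```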
Audit Layer¶
A tower middleware (`AuditLayer`) sits between the transport and the tool router. It intercepts every tool call and emits structured events via the `tracing` crate with `target: "audit"`. See Audit Logging for configuration details.
Concurrency and Rate Limiting¶
The server enforces concurrency and rate limits via CLI flags:
- `--max-concurrent 10` -- maximum number of parallel tool calls (default: 10)
- `--rate-limit-ms 100` -- minimum interval between calls in milliseconds (default: 100)
- `--request-timeout-secs 30` -- per-request timeout for the HTTP transport (default: 30)
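The minimum-interval behavior behind `--rate-limit-ms` can be sketched with the standard library alone. This is an illustration of the idea, not the server's implementation (which also caps concurrency, e.g. with a semaphore, and runs async):

```rust
use std::time::{Duration, Instant};

// Sketch of a minimum-interval rate limiter: each acquire() blocks
// until at least `min_interval` has passed since the previous call.
struct RateLimiter {
    min_interval: Duration,
    last_call: Option<Instant>,
}

impl RateLimiter {
    fn new(min_interval_ms: u64) -> Self {
        Self {
            min_interval: Duration::from_millis(min_interval_ms),
            last_call: None,
        }
    }

    /// Sleeps out the remainder of the interval, then records
    /// the new call time.
    fn acquire(&mut self) {
        if let Some(last) = self.last_call {
            let elapsed = last.elapsed();
            if elapsed < self.min_interval {
                std::thread::sleep(self.min_interval - elapsed);
            }
        }
        self.last_call = Some(Instant::now());
    }
}

fn main() {
    let mut limiter = RateLimiter::new(50);
    let start = Instant::now();
    for _ in 0..3 {
        limiter.acquire();
    }
    // Three calls enforce two gaps of >= 50 ms each.
    assert!(start.elapsed() >= Duration::from_millis(100));
    println!("rate limiter ok");
}
```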
Request Flow¶
A typical tool call flows through these layers:
- Transport -- receives MCP JSON-RPC message (stdio or HTTP)
- Audit layer -- records the call, starts timer
- Capability filter -- checks if the tool is visible under the current preset
- Router -- dispatches to the tool handler
- Tool handler -- runs the platform macro, which:
    - Checks the safety tier (write/destructive guard)
    - Resolves credentials and builds or reuses an API client
    - Executes the operation
    - Returns `CallToolResult`
- Audit layer -- records result status and duration
- Transport -- sends MCP JSON-RPC response