Introduction
redisctl is a command-line tool for managing Redis Cloud and Redis Enterprise deployments.
Features
redisctl provides:
- Type-Safe API Clients - Catch errors at compile time
- Async Operation Handling - Automatic polling with --wait
- Support Package Automation - 10+ minutes → 30 seconds
- Profile Management - Secure credential storage
- Structured Output - JSON, YAML, or Table with JMESPath
- Library-First Architecture - Reusable components
Quick Example
# Configure once
redisctl profile set prod --api-key $KEY --api-secret $SECRET
# Create subscription and wait
redisctl cloud subscription create \
--name prod \
--cloud-provider AWS \
--region us-east-1 \
--wait
# Everything just works
redisctl cloud database create --subscription $SUB --name mydb --wait
Installation
# Docker (quick start)
docker run ghcr.io/redis-developer/redisctl:latest --help
# macOS/Linux
brew install redis-developer/homebrew-tap/redisctl
# Or download from GitHub releases
See Installation for all methods, or try the Quick Start with Docker.
Architecture
redisctl is built as reusable libraries:
- redisctl-config - Profile and credential management
- redis-cloud - Cloud API client (21 handlers, 95%+ coverage)
- redis-enterprise - Enterprise API client (29 handlers, 100% coverage)
- redisctl - CLI binary (thin orchestration layer)
This enables Terraform providers, backup tools, monitoring dashboards, and more.
Next Steps
Need Help?
Presentation
This section contains presentation materials for redisctl.
Slide Deck
View the redisctl presentation slides
Use arrow keys to navigate, or press ? for keyboard shortcuts.
Topics Covered
- The problem: No CLI tooling for Redis Cloud/Enterprise
- The solution: redisctl's four-layer architecture
- Demo: Profile setup, Cloud operations, Enterprise operations
- Raw API access for any endpoint
- Support package automation
- Output formats and JMESPath queries
- Async operation handling
- Library-first architecture for ecosystem tools
- Use cases by persona
- Installation and getting started
Presenting
The slides use reveal.js and can be:
- Navigated with arrow keys
- Viewed in overview mode with Esc
- Printed to PDF via the ?print-pdf query parameter
- Shared via URL hash for specific slides
Quick Start
Get running in 60 seconds with Docker.
Try It Now
Redis Cloud
# Set your credentials
export REDIS_CLOUD_API_KEY="your-api-key"
export REDIS_CLOUD_SECRET_KEY="your-secret-key"
# Run a command
docker run --rm \
-e REDIS_CLOUD_API_KEY \
-e REDIS_CLOUD_SECRET_KEY \
ghcr.io/redis-developer/redisctl cloud subscription list
Redis Enterprise
# Set your credentials
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # for self-signed certs
# Run a command
docker run --rm \
-e REDIS_ENTERPRISE_URL \
-e REDIS_ENTERPRISE_USER \
-e REDIS_ENTERPRISE_PASSWORD \
-e REDIS_ENTERPRISE_INSECURE \
ghcr.io/redis-developer/redisctl enterprise cluster get
That's it! You just ran your first redisctl command.
Next Steps
Choose your path:
- New to redisctl? - Start with the Walkthrough to understand the 3-tier model
- Redis Cloud Users - Jump to Cloud Overview for Cloud-specific commands
- Redis Enterprise Users - Jump to Enterprise Overview for Enterprise-specific commands
- Ready to install? - See Installation for Homebrew, binaries, and more
- Developers - Check out Libraries for using redisctl as a Rust library
Installation
Docker
The quickest way to try redisctl with no installation:
# Run commands directly
docker run --rm ghcr.io/redis-developer/redisctl --help
# With environment variables
docker run --rm \
-e REDIS_CLOUD_API_KEY="your-key" \
-e REDIS_CLOUD_SECRET_KEY="your-secret" \
ghcr.io/redis-developer/redisctl cloud database list
Homebrew (macOS/Linux)
The easiest way to install on macOS or Linux:
# Install directly (automatically taps the repository)
brew install redis-developer/homebrew-tap/redisctl
# Or tap first, then install
brew tap redis-developer/homebrew-tap
brew install redisctl
This will:
- Install the latest stable version
- Set up the binary in your PATH
- Enable automatic updates via brew upgrade
To upgrade to the latest version:
brew upgrade redisctl
Binary Releases
Download the latest release for your platform from the GitHub releases page.
Linux/macOS
# Download the binary (replace VERSION and PLATFORM)
curl -L https://github.com/redis-developer/redisctl/releases/download/vVERSION/redisctl-PLATFORM.tar.gz | tar xz
# Move to PATH
sudo mv redisctl /usr/local/bin/
# Make executable
chmod +x /usr/local/bin/redisctl
Windows
Download the .zip file from the releases page and extract to a directory in your PATH.
From Cargo
If you have Rust installed:
# Basic installation
cargo install redisctl
# With secure credential storage support (recommended)
cargo install redisctl --features secure-storage
Feature Flags
| Feature | Description |
|---|---|
| secure-storage | Enables OS keyring support for secure credential storage (recommended) |
| cloud-only | Builds only Cloud functionality (smaller binary) |
| enterprise-only | Builds only Enterprise functionality (smaller binary) |
From Source
git clone https://github.com/redis-developer/redisctl.git
cd redisctl
# Basic installation
cargo install --path crates/redisctl
# With secure storage support (recommended)
cargo install --path crates/redisctl --features secure-storage
# Development build with all features
cargo build --release --all-features
Shell Completions
redisctl can generate shell completions for better command-line experience.
Bash
# Generate completion
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
# Or system-wide (requires sudo)
redisctl completions bash | sudo tee /usr/share/bash-completion/completions/redisctl
# Reload your shell or source the completion
source ~/.local/share/bash-completion/completions/redisctl
Zsh
# Add to your fpath (usually in ~/.zshrc)
redisctl completions zsh > ~/.zsh/completions/_redisctl
# Or use oh-my-zsh custom completions
redisctl completions zsh > ~/.oh-my-zsh/custom/completions/_redisctl
# Reload shell
exec zsh
Fish
# Generate completion
redisctl completions fish > ~/.config/fish/completions/redisctl.fish
# Completions are loaded automatically
PowerShell
# Generate completion
redisctl completions powershell | Out-String | Invoke-Expression
# To make permanent, add to your PowerShell profile
redisctl completions powershell >> $PROFILE
Elvish
# Generate completion
redisctl completions elvish > ~/.config/elvish/lib/redisctl.elv
# Add to rc.elv
echo "use redisctl" >> ~/.config/elvish/rc.elv
Verify Installation
redisctl --version
Platform-Specific Binaries
For specific deployment scenarios, you can build platform-specific binaries:
# Cloud-only binary (smaller size)
cargo build --release --features cloud-only --bin redis-cloud
# Enterprise-only binary
cargo build --release --features enterprise-only --bin redis-enterprise
Next Steps
- Configuration - Set up your credentials
- Quick Start - Run your first commands
Profiles & Authentication
Profiles store connection details and credentials for your Redis deployments. You can have multiple profiles for different environments (dev, staging, production).
Quick Setup
Redis Cloud
# Using environment variables (simplest)
export REDIS_CLOUD_API_KEY="your-api-key"
export REDIS_CLOUD_SECRET_KEY="your-secret-key"
# Or create a profile
redisctl profile set cloud-prod \
--deployment-type cloud \
--api-key "your-api-key" \
--api-secret "your-secret-key"
Redis Enterprise
# Using environment variables
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # for self-signed certs
# Or create a profile
redisctl profile set enterprise-prod \
--deployment-type enterprise \
--url "https://cluster.example.com:9443" \
--username "admin@cluster.local" \
--password "your-password" \
--insecure
Getting Credentials
Redis Cloud API Keys
- Log in to app.redislabs.com
- Click your name → Account Settings → API Keys
- Click "Add API Key" and give it a name
- Copy both the Account key and Secret (you won't see the secret again!)
Redis Enterprise Credentials
- URL: https://cluster-fqdn:9443
- Username: Configured during setup (often admin@cluster.local)
- Password: Set during cluster bootstrap
Profile Management
# List all profiles
redisctl profile list
# Show profile details
redisctl profile get cloud-prod
# Set default profile
redisctl profile set-default cloud-prod
# Delete a profile
redisctl profile delete old-profile
# Use a specific profile
redisctl --profile cloud-prod cloud database list
Credential Storage Options
1. Environment Variables (CI/CD)
Best for automation and CI/CD pipelines:
# Cloud
export REDIS_CLOUD_API_KEY="..."
export REDIS_CLOUD_SECRET_KEY="..."
# Enterprise
export REDIS_ENTERPRISE_URL="..."
export REDIS_ENTERPRISE_USER="..."
export REDIS_ENTERPRISE_PASSWORD="..."
2. OS Keyring (Recommended for Local)
Store credentials securely in your operating system's keychain:
# Requires secure-storage feature
cargo install redisctl --features secure-storage
# Create profile with keyring storage
redisctl profile set production \
--deployment-type cloud \
--api-key "your-key" \
--api-secret "your-secret" \
--use-keyring
Your config file will contain references, not actual secrets:
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key"
api_secret = "keyring:production-api-secret"
3. Configuration File (Development Only)
For development, credentials can be stored in ~/.config/redisctl/config.toml:
default_profile = "dev"
[profiles.dev]
deployment_type = "cloud"
api_key = "your-api-key"
api_secret = "your-secret-key"
Warning: This stores credentials in plaintext. Use keyring or environment variables for production.
Configuration File Location
- Linux/macOS: ~/.config/redisctl/config.toml
- Windows: %APPDATA%\redis\redisctl\config.toml
Override Hierarchy
Settings are resolved in this order (later overrides earlier):
- Configuration file (profile settings)
- Environment variables
- Command-line flags
# Profile says cloud-prod, but CLI overrides
redisctl --profile cloud-prod --api-key "different-key" cloud database list
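The precedence rules above amount to a layered merge where later sources win. A minimal Python sketch of that resolution (illustrative only, not redisctl's actual implementation; the setting names are hypothetical):

```python
def resolve_settings(profile: dict, env: dict, cli: dict) -> dict:
    """Merge settings so env overrides profile, and CLI flags override both."""
    merged = {}
    for source in (profile, env, cli):  # later sources override earlier ones
        merged.update({k: v for k, v in source.items() if v is not None})
    return merged

settings = resolve_settings(
    profile={"api_key": "profile-key", "region": "us-east-1"},
    env={"api_key": "env-key"},
    cli={"api_key": "different-key"},
)
# api_key comes from the CLI flag; region falls through from the profile
```

This is why a `--api-key` flag beats both the environment variable and the stored profile, while profile values still fill in anything the higher layers leave unset.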
Security Best Practices
- Use OS keyring for local development
- Use environment variables for CI/CD
- Never commit credentials to version control
- Set file permissions: chmod 600 ~/.config/redisctl/config.toml
- Rotate credentials regularly
- Use read-only API keys when write access isn't needed
Troubleshooting
Authentication Failed
# Check what credentials are being used
redisctl profile get
# Enable debug logging
RUST_LOG=debug redisctl api cloud get /
Profile Not Found
# List available profiles
redisctl profile list
# Check config file exists
cat ~/.config/redisctl/config.toml
Certificate Errors (Enterprise)
For self-signed certificates:
# Via environment
export REDIS_ENTERPRISE_INSECURE="true"
# Or in profile
redisctl profile set myprofile --insecure
Output Formats
redisctl supports multiple output formats for different use cases.
Available Formats
JSON (Default)
redisctl cloud database list
redisctl enterprise cluster get
Output:
{
"name": "my-cluster",
"nodes": 3,
"version": "7.2.4"
}
Table
Human-readable tabular format:
redisctl cloud database list -o table
redisctl enterprise database list --output table
Output:
ID NAME MEMORY STATUS
1 cache 1GB active
2 sessions 512MB active
YAML
redisctl enterprise cluster get -o yaml
Output:
name: my-cluster
nodes: 3
version: 7.2.4
Combining with JMESPath
Filter and format in one command:
# JSON with filtered fields
redisctl enterprise database list -q "[].{name:name,memory:memory_size}"
# Table with specific columns
redisctl cloud subscription list -o table -q "[].{id:id,name:name,status:status}"
Use Cases
| Format | Best For |
|---|---|
| JSON | Scripting, CI/CD pipelines |
| Table | Interactive use, quick overview |
| YAML | Config files, readable structured data |
Using JMESPath Queries
Use the built-in -q/--query flag for filtering and transforming output without external tools:
# Get first database name
redisctl cloud database list -q '[0].name'
# Count items
redisctl enterprise database list -q 'length(@)'
# Get specific fields from all items
redisctl cloud subscription list -q '[].{id: id, name: name}'
# Filter by condition
redisctl enterprise database list -q "[?status=='active'].name"
# Get raw values for shell scripts (no JSON quotes)
redisctl cloud database list -q '[0].name' --raw
Note: JMESPath is built into redisctl, so you don't need external tools like jq for most operations.
JMESPath Queries
Filter and transform output using JMESPath expressions with the -q or --query flag. redisctl includes 300+ extended functions beyond standard JMESPath.
Basic Usage
# Get specific field
redisctl enterprise cluster get -q 'name'
# Get nested field
redisctl cloud database get 123:456 -q 'security.ssl_client_authentication'
# Get multiple fields
redisctl enterprise database get 1 -q '{name: name, memory: memory_size, port: port}'
Real-World Examples
These examples work with actual Redis Cloud API responses:
Counting and Aggregation
# Count all subscriptions
redisctl cloud subscription list -o json -q 'length(@)'
# Output: 192
# Aggregate statistics across all subscriptions
redisctl cloud subscription list -o json \
-q '{total_subscriptions: length(@), total_size_gb: sum([*].cloudDetails[0].totalSizeInGb), avg_size_gb: avg([*].cloudDetails[0].totalSizeInGb)}'
# Output:
# {
# "avg_size_gb": 1.96,
# "total_size_gb": 23.56,
# "total_subscriptions": 192
# }
Projections - Reshaping Data
# Extract specific fields from each subscription
redisctl cloud subscription list -o json \
-q '[*].{id: id, name: name, provider: cloudDetails[0].provider, region: cloudDetails[0].regions[0].region} | [:5]'
# Output:
# [
# {"id": 2983053, "name": "time-series-demo", "provider": "AWS", "region": "ap-southeast-1"},
# {"id": 2988697, "name": "workshop-sub", "provider": "AWS", "region": "us-east-1"},
# ...
# ]
Unique Values and Sorting
# Get unique cloud providers
redisctl cloud subscription list -o json -q '[*].cloudDetails[0].provider | unique(@)'
# Output: ["AWS", "GCP"]
# Get unique regions, sorted
redisctl cloud subscription list -o json \
-q '[*].cloudDetails[0].regions[0].region | unique(@) | sort(@)'
# Output: ["ap-northeast-1", "ap-south-1", "ap-southeast-1", "europe-west1", ...]
# Sort subscription names alphabetically
redisctl cloud subscription list -o json -q '[*].name | sort(@) | [:10]'
Filtering with Patterns
# Find subscriptions containing 'demo'
redisctl cloud subscription list -o json \
-q "[*].name | [?contains(@, 'demo')] | [:5]"
# Output: ["xw-time-series-demo", "gabs-redis-streams-demo", "anton-live-demo", ...]
# Filter by prefix
redisctl cloud subscription list -o json \
-q "[*].name | [?starts_with(@, 'gabs')] | [:5]"
# Output: ["gabs-aws-workshop-sub", "gabs-santander-rdi", "gabs-redis-streams-demo", ...]
# Filter by suffix
redisctl cloud subscription list -o json \
-q "[*].name | [?ends_with(@, 'demo')] | [:5]"
String Transformations
# Convert names to uppercase
redisctl cloud subscription list -o json -q 'map(&upper(name), [*]) | [:3]'
# Output: ["XW-TIME-SERIES-DEMO", "GABS-AWS-WORKSHOP-SUB", "BAMOS-TEST"]
# Replace substrings
redisctl cloud subscription list -o json \
-q "[*].{name: name, replaced: replace(name, 'demo', 'DEMO')} | [?contains(name, 'demo')] | [:3]"
# Output:
# [
# {"name": "xw-time-series-demo", "replaced": "xw-time-series-DEMO"},
# {"name": "gabs-redis-streams-demo", "replaced": "gabs-redis-streams-DEMO"},
# ...
# ]
Fuzzy Matching with Levenshtein Distance
# Find subscriptions with names similar to "production"
redisctl cloud subscription list -o json \
-q "[*].{name: name, distance: levenshtein(name, 'production')} | sort_by(@, &distance) | [:5]"
# Output:
# [
# {"distance": 8.0, "name": "piyush-db"},
# {"distance": 8.0, "name": "erni-rdi-1"},
# ...
# ]
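For reference, levenshtein(a, b) computes the classic edit distance: the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into the other. A minimal Python sketch of what the function computes (illustrative; not redisctl's internal code):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to transform string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # delete a character from a
                cur[j - 1] + 1,            # insert a character into a
                prev[j - 1] + (ca != cb),  # substitute (free if equal)
            ))
        prev = cur
    return prev[len(b)]

d = levenshtein("kitten", "sitting")  # classic example: 3 edits
```

Smaller distances mean more similar names, which is why sorting ascending by distance surfaces the closest matches first.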
Sorting by Computed Values
# Sort by name length (shortest first)
redisctl cloud subscription list -o json \
-q "[*].{name: name, len: length(name)} | sort_by(@, &len) | [:5]"
# Output:
# [
# {"len": 5, "name": "bgiri"},
# {"len": 6, "name": "abhidb"},
# {"len": 6, "name": "CM-rag"},
# ...
# ]
Array Operations
# Get all names from a list
redisctl enterprise database list -q '[].name'
# Get first item
redisctl cloud subscription list -q '[0]'
# Get specific fields from each item
redisctl enterprise database list -q '[].{id: uid, name: name, status: status}'
Filtering
# Filter by condition
redisctl enterprise database list -q "[?status=='active']"
# Filter and select fields
redisctl cloud database list -q "[?memoryLimitInGb > `1`].{name: name, memory: memoryLimitInGb}"
# Multiple conditions
redisctl enterprise database list -q "[?status=='active' && memory_size > `1073741824`]"
Pipelines
Chain operations together with | for complex transformations:
# Get unique regions -> sort -> count
redisctl cloud subscription list -o json \
-q '[*].cloudDetails[0].regions[0].region | unique(@) | sort(@) | length(@)'
# Output: 7
# Filter -> Sort -> Take top 3 -> Reshape
redisctl enterprise database list -q '
[?status=='active']
| sort_by(@, &memory_size)
| reverse(@)
| [:3]
| [*].{name: name, memory_gb: to_string(memory_size / `1073741824`)}'
Extended Functions
redisctl includes 300+ extended JMESPath functions. Here are the most useful categories:
String Functions
# Case conversion
redisctl cloud subscription list -o json -q '[*].{name: name, upper_name: upper(name)} | [:3]'
# Trim whitespace
redisctl enterprise database list -q '[].{name: trim(name)}'
# Replace substrings
redisctl cloud subscription list -o json \
-q "[*].{original: name, modified: replace(name, '-', '_')} | [:3]"
Formatting Functions
# Format bytes (human readable)
redisctl enterprise database list -q '[].{name: name, memory: format_bytes(memory_size)}'
# Output: [{"name": "cache", "memory": "1.00 GB"}]
# Format duration
redisctl enterprise database list -q '[].{name: name, uptime: format_duration(uptime_seconds)}'
# Output: [{"name": "cache", "uptime": "2d 5h 30m"}]
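What format_bytes does can be approximated in a few lines. This sketch assumes binary (1024-based) units and two decimal places to match the "1.00 GB" output above; redisctl's exact rounding and unit labels may differ:

```python
def format_bytes(n: float) -> str:
    """Render a byte count with binary units, e.g. 1073741824 -> '1.00 GB'."""
    units = ("B", "KB", "MB", "GB", "TB", "PB")
    for unit in units:
        if abs(n) < 1024 or unit == units[-1]:
            return f"{n:.2f} {unit}"
        n /= 1024  # step up to the next unit
```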
Date/Time Functions
# Current timestamp
redisctl cloud subscription list -o json -q '{count: length(@), timestamp: now()}'
# Output: {"count": 192, "timestamp": 1765661197.0}
# Human-readable relative time
redisctl cloud task list -q '[].{id: id, created: time_ago(created_time)}'
# Output: [{"id": "task-123", "created": "2 hours ago"}]
Network Functions
# Check if IP is private
redisctl enterprise node list -q '[?is_private_ip(addr)].addr'
# Check CIDR containment
redisctl enterprise node list -q '[?cidr_contains(`"10.0.0.0/8"`, addr)]'
Math Functions
# Get max value
redisctl cloud subscription list -o json -q '[*].cloudDetails[0].totalSizeInGb | max(@)'
# Output: 23.2027
# Statistics
redisctl enterprise database list -q '{
total: sum([].memory_size),
avg: avg([].memory_size),
count: length(@)
}'
Semver Functions
# Compare versions
redisctl enterprise cluster get -q '{
version: version,
needs_upgrade: semver_compare(version, `"7.4.0"`) < `0`
}'
# Check version constraints
redisctl enterprise node list -q '[?semver_satisfies(redis_version, `">=7.0.0"`)]'
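semver_compare follows the usual three-way-comparison convention: negative when the first version is lower, zero when equal, positive when higher. A simplified Python sketch of the comparison the needs_upgrade expression relies on (illustrative; pre-release tags are stripped rather than ordered, unlike full semver):

```python
def semver_compare(a: str, b: str) -> int:
    """Three-way compare of 'major.minor.patch' strings; negative means a < b.
    Pre-release suffixes (e.g. '-rc1') are stripped, not ordered."""
    pa = [int(x) for x in a.split("-")[0].split(".")]
    pb = [int(x) for x in b.split("-")[0].split(".")]
    return (pa > pb) - (pa < pb)

# A cluster reporting 7.2.4 compares below 7.4.0, so needs_upgrade is true
assert semver_compare("7.2.4", "7.4.0") < 0
```

Note the numeric comparison: "10.0.0" correctly sorts above "9.9.9", which a plain string comparison would get wrong.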
Type Functions
# Type checking
redisctl cloud subscription list -o json -q '[*].{name: name, type: type_of(id)} | [:3]'
# Output:
# [
# {"name": "xw-time-series-demo", "type": "number"},
# ...
# ]
# Default values for missing fields
redisctl cloud database get 123:456 -q '{name: name, region: default(region, `"unknown"`)}'
# Check if empty
redisctl enterprise database get 1 -q '{name: name, has_endpoints: not(is_empty(endpoints))}'
Utility Functions
# Unique values
redisctl cloud subscription list -o json -q 'unique([*].status)'
# Output: ["active"]
# Coalesce (first non-null)
redisctl cloud database get 123:456 -q '{region: coalesce(region, cloud_region, `"default"`)}'
Fuzzy Matching
# Levenshtein distance for fuzzy search
redisctl cloud subscription list -o json \
-q "[*].{name: name, distance: levenshtein(name, 'cache')} | sort_by(@, &distance) | [:5]"
# Find similar names (distance < 3)
redisctl enterprise database list -q '[?levenshtein(name, `"cache"`) < `3`]'
Encoding Functions
# Base64 encode/decode
redisctl enterprise cluster get -q '{encoded: base64_encode(name)}'
# URL encode/decode
redisctl api cloud get /subscriptions -q '[].{safe_name: url_encode(name)}'
Hash Functions
# Generate hashes
redisctl enterprise database list -q '[].{name: name, hash: sha256(name)}'
JSON Patch Functions
# Compare two configs
redisctl enterprise database get 1 -q 'json_diff(current_config, desired_config)'
Function Categories Reference
| Category | Example Functions |
|---|---|
| String | upper, lower, trim, replace, split, snake_case, camel_case |
| Array | unique, flatten, chunk, zip, intersection |
| Object | keys, values, pick, omit, deep_merge |
| Math | round, floor, ceil, sum, avg, max, min, stddev |
| Type | type_of, is_array, is_string, to_boolean |
| Utility | if, coalesce, default, now |
| DateTime | format_date, time_ago, relative_time, is_weekend |
| Duration | format_duration, parse_duration |
| Network | is_private_ip, cidr_contains, ip_to_int |
| Computing | format_bytes, parse_bytes, bit_and, bit_or |
| Validation | is_email, is_uuid, is_ipv4, is_url |
| Encoding | base64_encode, base64_decode, url_encode |
| Hash | md5, sha256, hmac_sha256 |
| Regex | regex_match, regex_replace, regex_extract |
| Semver | semver_compare, semver_satisfies, semver_parse |
| Fuzzy | levenshtein, soundex, jaro_winkler |
| JSON Patch | json_diff, json_patch, json_merge_patch |
For the complete list, see the jmespath-extensions documentation.
JMESPath Syntax Reference
| Syntax | Description | Example |
|---|---|---|
| 'value' | String literal | [?name=='cache'] |
| `123` | Number literal | [?size > `1024`] |
| `true` | Boolean literal | [?active == `true`] |
| @ | Current element | sort_by(@, &name) |
| .field | Child access | cluster.name |
| [0] | Index access | nodes[0] |
| [0:5] | Slice | databases[:5] |
| [?expr] | Filter | [?status=='active'] |
| {k: v} | Multi-select | {id: uid, n: name} |
| \| | Pipe | [].name \| sort(@) |
| &field | Expression reference | sort_by(@, &name) |
For full syntax, see jmespath.org.
Combining with Output Formats
# Query then format as table
redisctl enterprise database list \
-q "[?status=='active'].{name:name,memory:memory_size}" \
-o table
Async Operations
Many Redis Cloud and Enterprise operations are asynchronous - they return immediately with a task ID while the work happens in the background. redisctl handles this automatically.
The --wait Flag
Use --wait to block until the operation completes:
# Returns immediately with task ID
redisctl cloud database create --subscription 123 --data '{...}'
# Waits for completion, returns final result
redisctl cloud database create --subscription 123 --data '{...}' --wait
Polling Options
Control how redisctl polls for completion:
# Check every 10 seconds (default: 5); give up after 10 minutes (default: 300)
redisctl cloud subscription create \
--data '{...}' \
--wait \
--poll-interval 10 \
--max-wait 600
Task Management
Check Task Status
# Cloud
redisctl cloud task get <task-id>
# Enterprise
redisctl enterprise action get <action-id>
List Recent Tasks
# Cloud - list all tasks
redisctl cloud task list
# Enterprise - list actions
redisctl enterprise action list
Common Async Operations
Redis Cloud
- subscription create/delete
- database create/update/delete
- vpc-peering create/delete
- cloud-account create/delete
Redis Enterprise
- database create/update/delete
- cluster join/remove-node
- module upload
Error Handling
When --wait is used and an operation fails:
$ redisctl cloud database create --data '{...}' --wait
Error: Task failed: Invalid memory configuration
# Check task details
$ redisctl cloud task get abc-123
{
"taskId": "abc-123",
"status": "failed",
"error": "Invalid memory configuration"
}
Scripting Patterns
Wait and Extract Result
# Create and get the new database ID
DB_ID=$(redisctl cloud database create \
--subscription 123 \
--data '{"name": "mydb"}' \
--wait \
-q 'databaseId')
echo "Created database: $DB_ID"
Fire and Forget
# Start multiple operations in parallel
redisctl cloud database delete 123:456 &
redisctl cloud database delete 123:789 &
wait
Custom Polling
# Start operation
TASK_ID=$(redisctl cloud database create --data '{...}' -q 'taskId')
# Custom polling loop
while true; do
STATUS=$(redisctl cloud task get $TASK_ID -q 'status')
echo "Status: $STATUS"
if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
break
fi
sleep 10
done
Timeouts
If an operation exceeds --max-wait:
$ redisctl cloud subscription create --data '{...}' --wait --max-wait 60
Error: Operation timed out after 60 seconds. Task ID: abc-123
# Check manually
$ redisctl cloud task get abc-123
The operation continues in the background - only the CLI stops waiting.
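The overall --wait behavior is a polling loop with a deadline. An illustrative Python sketch of the pattern (not redisctl's implementation; get_status stands in for running redisctl cloud task get <task-id> -q 'status', and the terminal statuses are the processing-completed/processing-error values shown in the API layer examples):

```python
import time

def wait_for_task(get_status, poll_interval=5, max_wait=300,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll a task until it reaches a terminal state or max_wait elapses.
    On timeout only the client gives up; the task keeps running server-side."""
    deadline = clock() + max_wait
    while True:
        status = get_status()
        if status in ("processing-completed", "processing-error"):
            return status
        if clock() >= deadline:
            raise TimeoutError("timed out; task continues in the background")
        sleep(poll_interval)
```

The clock and sleep parameters are injected here only so the loop is easy to test; the important part is that the deadline check happens after each status fetch, so a task that completes right at the limit is still reported.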
Redis Cloud Overview
Redis Cloud is Redis's fully managed database service. redisctl provides complete CLI access to the Redis Cloud API.
Three-Tier Access
1. API Layer
Direct REST access for scripting and automation:
redisctl api cloud get /subscriptions
redisctl api cloud post /subscriptions -d @subscription.json
2. Commands
Human-friendly commands for day-to-day operations:
redisctl cloud subscription list
redisctl cloud database create --subscription 123 --data @db.json --wait
3. Workflows
Multi-step operations:
redisctl cloud workflow subscription-setup --name prod --region us-east-1
Key Concepts
Subscriptions
Subscriptions are the top-level container for databases. They define:
- Cloud provider (AWS, GCP, Azure)
- Region
- Memory allocation
- Networking configuration
Databases
Databases run within subscriptions. Each database has:
- Memory limit
- Modules (RedisJSON, RediSearch, etc.)
- Persistence settings
- Access credentials
Tasks
Most operations are async and return task IDs. Use --wait to block until completion.
Authentication
Redis Cloud uses API key authentication:
# Environment variables
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_SECRET_KEY="your-secret"
# Or profile
redisctl profile set cloud --deployment-type cloud --api-key "..." --api-secret "..."
Get your API keys from app.redislabs.com → Account Settings → API Keys.
Quick Examples
# List subscriptions
redisctl cloud subscription list -o table
# Create database and wait
redisctl cloud database create \
--subscription 123456 \
--data '{"name": "cache", "memoryLimitInGb": 1}' \
--wait
# Get database connection info
redisctl cloud database get 123456:789 \
-q '{endpoint: publicEndpoint, password: password}'
# Set up VPC peering
redisctl cloud vpc-peering create \
--subscription 123456 \
--data @peering.json \
--wait
Command Groups
- Databases - Create, update, delete databases
- Subscriptions - Manage subscriptions
- Access Control - Users, roles, ACLs
- Networking - VPC, PSC, Transit Gateway
- Tasks - Monitor async operations
Next Steps
- API Layer - Direct REST access
- Workflows - Multi-step operations
- Cloud Cookbook - Practical recipes
Cloud API Layer
Direct REST access to the Redis Cloud API for scripting and automation.
Overview
The API layer lets you call any Redis Cloud REST endpoint directly. It's like a smart curl with:
- Automatic authentication
- Profile support
- Output formatting
Usage
redisctl api cloud <method> <endpoint> [options]
Methods: get, post, put, delete
Examples
GET Requests
# Account info
redisctl api cloud get /
# List subscriptions
redisctl api cloud get /subscriptions
# Get specific subscription
redisctl api cloud get /subscriptions/123456
# List databases
redisctl api cloud get /subscriptions/123456/databases
# Get specific database
redisctl api cloud get /subscriptions/123456/databases/789
POST Requests
# Create subscription
redisctl api cloud post /subscriptions -d @subscription.json
# Create database
redisctl api cloud post /subscriptions/123456/databases -d '{
"name": "mydb",
"memoryLimitInGb": 1
}'
PUT Requests
# Update database
redisctl api cloud put /subscriptions/123456/databases/789 -d '{
"memoryLimitInGb": 2
}'
DELETE Requests
# Delete database
redisctl api cloud delete /subscriptions/123456/databases/789
# Delete subscription
redisctl api cloud delete /subscriptions/123456
Options
| Option | Description |
|---|---|
| -d, --data <JSON> | Request body (inline or @file) |
| -o, --output <FORMAT> | Output format (json, yaml, table) |
| -q, --query <JMESPATH> | Filter output |
Common Endpoints
Account
- GET / - Account info
- GET /payment-methods - Payment methods
- GET /regions - Available regions
Subscriptions
- GET /subscriptions - List all
- POST /subscriptions - Create
- GET /subscriptions/{id} - Get one
- PUT /subscriptions/{id} - Update
- DELETE /subscriptions/{id} - Delete
Databases
- GET /subscriptions/{id}/databases - List
- POST /subscriptions/{id}/databases - Create
- GET /subscriptions/{id}/databases/{dbId} - Get
- PUT /subscriptions/{id}/databases/{dbId} - Update
- DELETE /subscriptions/{id}/databases/{dbId} - Delete
ACL
- GET /acl/users - List users
- GET /acl/roles - List roles
- GET /acl/redisRules - List Redis rules
Networking
- GET /subscriptions/{id}/peerings - VPC peerings
- GET /subscriptions/{id}/privateServiceConnect - PSC
- GET /subscriptions/{id}/transitGateway - Transit Gateway
Tasks
- GET /tasks - List tasks
- GET /tasks/{taskId} - Get task
Scripting Examples
Create and Wait
# Create database
TASK_ID=$(redisctl api cloud post /subscriptions/123/databases \
-d @database.json \
-q 'taskId')
# Poll for completion
while true; do
STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q 'status')
[ "$STATUS" = "processing-completed" ] && break
[ "$STATUS" = "processing-error" ] && exit 1
sleep 10
done
# Get result
redisctl api cloud get /tasks/$TASK_ID -q 'response.resourceId'
Bulk Operations
# Get all database IDs and process each
for db in $(redisctl api cloud get /subscriptions/123/databases -q '[].databaseId' --raw); do
redisctl api cloud get /subscriptions/123/databases/$db
done
Export to File
# Save subscription config
redisctl api cloud get /subscriptions/123 > subscription.json
# Save all databases
redisctl api cloud get /subscriptions/123/databases > databases.json
When to Use API Layer
Use API layer when:
- Endpoint isn't wrapped in human commands yet
- You need exact control over the request
- Building automation scripts
- Exploring the API
Use human commands when:
- There's a command for what you need
- You want built-in --wait support
- You prefer ergonomic flags over JSON
API Documentation
Full API documentation: Redis Cloud API Docs
Databases
Manage Redis Cloud databases within subscriptions.
Commands
List Databases
List all databases across subscriptions or in a specific subscription.
redisctl cloud database list [OPTIONS]
Options:
- --subscription <ID> - Filter by subscription ID
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# List all databases
redisctl cloud database list
# List databases in specific subscription
redisctl cloud database list --subscription 123456
# Table format
redisctl cloud database list -o table
# Filter active databases
redisctl cloud database list -q "[?status=='active']"
# Get names and endpoints
redisctl cloud database list -q "[].{name: name, endpoint: publicEndpoint}"
Get Database
Get details of a specific database.
redisctl cloud database get <ID> [OPTIONS]
Arguments:
- <ID> - Database ID (format: subscription_id:database_id)
Examples:
# Get database details
redisctl cloud database get 123456:789
# Get connection details
redisctl cloud database get 123456:789 \
-q "{endpoint: publicEndpoint, port: port, password: password}"
Create Database
Create a new database in a subscription.
redisctl cloud database create --subscription <ID> [OPTIONS]
Required:
- --subscription <ID> - Subscription ID
Options:
- --name <NAME> - Database name
- --memory <GB> - Memory limit in GB
- --protocol <PROTOCOL> - redis or memcached
- --data <JSON> - Full configuration as JSON
- --wait - Wait for completion
Examples:
# Create with flags
redisctl cloud database create \
--subscription 123456 \
--name mydb \
--memory 1
# Create with JSON data
redisctl cloud database create \
--subscription 123456 \
--data @database.json \
--wait
# Create with inline JSON
redisctl cloud database create \
--subscription 123456 \
--data '{"name": "test-db", "memoryLimitInGb": 1}'
Example JSON Payload:
{
"name": "production-cache",
"memoryLimitInGb": 4,
"protocol": "redis",
"replication": true,
"dataPersistence": "aof-every-write",
"modules": [
{"name": "RedisJSON"},
{"name": "RediSearch"}
]
}
Update Database
Update database configuration.
redisctl cloud database update <ID> [OPTIONS]
Arguments:
- <ID> - Database ID (format: subscription_id:database_id)
Options:
- --data <JSON> - Updates to apply
- --wait - Wait for completion
Examples:
# Increase memory
redisctl cloud database update 123456:789 \
--data '{"memoryLimitInGb": 8}' \
--wait
# Update eviction policy
redisctl cloud database update 123456:789 \
--data '{"dataEvictionPolicy": "volatile-lru"}'
Delete Database
Delete a database.
redisctl cloud database delete <ID> [OPTIONS]
Arguments:
- <ID> - Database ID (format: subscription_id:database_id)
Options:
- --wait - Wait for deletion to complete
Examples:
# Delete database
redisctl cloud database delete 123456:789
# Delete and wait
redisctl cloud database delete 123456:789 --wait
Database Operations
Backup Database
Trigger a manual backup.
redisctl cloud database backup <ID> [OPTIONS]
Examples:
redisctl cloud database backup 123456:789
redisctl cloud database backup 123456:789 --wait
Import Data
Import data into a database.
redisctl cloud database import <ID> --data <JSON> [OPTIONS]
Example:
redisctl cloud database import 123456:789 --data '{
"sourceType": "s3",
"importFromUri": ["s3://bucket/backup.rdb"]
}'
Common Patterns
Get Connection String
ENDPOINT=$(redisctl cloud database get 123456:789 -q 'publicEndpoint')
PASSWORD=$(redisctl cloud database get 123456:789 -q 'password')
echo "redis://:$PASSWORD@$ENDPOINT"
Monitor Databases
# Check memory usage across all databases
redisctl cloud database list \
-q "[].{name: name, used: usedMemoryInMB, limit: memoryLimitInGb}" \
-o table
Bulk Operations
# Process all database IDs
for db in $(redisctl cloud database list -q '[].databaseId' --raw); do
echo "Processing $db"
done
Troubleshooting
"Database creation failed"
- Check subscription has available resources
- Verify region supports requested features
- Check module compatibility
"Cannot connect"
- Verify database is active: check the status field
- Check firewall/security group rules
- Ensure correct endpoint and port
API Reference
REST endpoints:
- GET /v1/subscriptions/{subId}/databases - List
- POST /v1/subscriptions/{subId}/databases - Create
- GET /v1/subscriptions/{subId}/databases/{dbId} - Get
- PUT /v1/subscriptions/{subId}/databases/{dbId} - Update
- DELETE /v1/subscriptions/{subId}/databases/{dbId} - Delete
For direct API access: redisctl api cloud get /subscriptions/123456/databases
Subscriptions
Manage Redis Cloud subscriptions - the containers for your databases and configuration.
Commands
List Subscriptions
List all subscriptions in your account.
redisctl cloud subscription list [OPTIONS]
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table (default: auto)
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# List all subscriptions
redisctl cloud subscription list
# Table format with specific fields
redisctl cloud subscription list -o table
# Get only subscription IDs and names
redisctl cloud subscription list -q "[].{id: id, name: name}"
# Filter by status
redisctl cloud subscription list -q "[?status=='active']"
Get Subscription
Get details of a specific subscription.
redisctl cloud subscription get <ID> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# Get subscription details
redisctl cloud subscription get 123456
# Get specific fields in YAML
redisctl cloud subscription get 123456 -o yaml -q "{name: name, status: status, databases: numberOfDatabases}"
Create Subscription
Create a new subscription.
redisctl cloud subscription create --data <JSON> [OPTIONS]
Options:
- --data <JSON> - JSON payload (inline or @file.json)
- --wait - Wait for operation to complete
- --wait-timeout <SECONDS> - Maximum time to wait (default: 600)
- --wait-interval <SECONDS> - Polling interval (default: 10)
Example Payload:
{
"name": "Production Subscription",
"cloudProvider": {
"provider": "AWS",
"regions": [
{
"region": "us-east-1",
"multipleAvailabilityZones": true,
"networking": {
"deploymentCIDR": "10.0.0.0/24"
}
}
]
},
"databases": [
{
"name": "cache-db",
"memoryLimitInGb": 1,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 10000
}
}
]
}
Examples:
# Create subscription from file
redisctl cloud subscription create --data @subscription.json
# Create and wait for completion
redisctl cloud subscription create --data @subscription.json --wait
# Create with inline JSON
redisctl cloud subscription create --data '{
"name": "Test Subscription",
"cloudProvider": {"provider": "AWS", "regions": [{"region": "us-east-1"}]}
}'
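Payload typos are a common cause of create failures, so it is cheap to check that a payload file is well-formed JSON before sending it. A minimal sketch, assuming python3 is available:

```shell
# Validate the payload file is well-formed JSON before calling the API
cat > subscription.json <<'EOF'
{
  "name": "Test Subscription",
  "cloudProvider": {"provider": "AWS", "regions": [{"region": "us-east-1"}]}
}
EOF
python3 -m json.tool subscription.json >/dev/null && echo "valid JSON"
```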
Update Subscription
Update an existing subscription.
redisctl cloud subscription update <ID> --data <JSON> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- --data <JSON> - JSON payload with updates
- --wait - Wait for operation to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Update subscription name
redisctl cloud subscription update 123456 --data '{"name": "New Name"}'
# Update payment method
redisctl cloud subscription update 123456 --data '{"paymentMethodId": 8840}' --wait
Delete Subscription
Delete a subscription (requires all databases to be deleted first).
redisctl cloud subscription delete <ID> [OPTIONS]
Arguments:
- <ID> - Subscription ID
Options:
- --wait - Wait for deletion to complete
- --wait-timeout <SECONDS> - Maximum time to wait
- --wait-interval <SECONDS> - Polling interval
Examples:
# Delete subscription
redisctl cloud subscription delete 123456
# Delete and wait for completion
redisctl cloud subscription delete 123456 --wait
Fixed Subscriptions
Fixed subscriptions offer reserved capacity with predictable pricing.
List Fixed Subscriptions
redisctl cloud fixed-subscription list
Get Fixed Subscription
redisctl cloud fixed-subscription get <ID>
Create Fixed Subscription
redisctl cloud fixed-subscription create --data @fixed-subscription.json --wait
Example Payload:
{
"name": "Fixed Production",
"plan": {
"provider": "AWS",
"region": "us-east-1",
"size": "r5.xlarge"
},
"quantity": 2
}
Related Commands
- Databases - Manage databases within subscriptions
- Network Connectivity - Configure VPC peering and private endpoints
- Provider Accounts - Manage cloud provider integrations
Common Patterns
List All Databases Across Subscriptions
# List databases for each subscription
for sub in $(redisctl cloud subscription list -q '[].id' --raw); do
echo "Subscription $sub:"
redisctl cloud database list --subscription $sub
done
Monitor Subscription Usage
# Get memory usage across all databases
redisctl cloud subscription get 123456 \
-q "databases[].{name: name, memory: memoryLimitInGb}" -o table
Troubleshooting
Common Issues
"Subscription not found"
- Verify the subscription ID is correct
- Check that your API key has access to the subscription
"Cannot delete subscription with active databases"
- Delete all databases first: list them with redisctl cloud database list --subscription <ID>
- Then delete each database before deleting the subscription
"Operation timeout"
- Increase timeout: --wait-timeout 1200
- Check operation status: redisctl cloud task get <TASK_ID>
API Reference
These commands use the following REST endpoints:
- GET /v1/subscriptions - List subscriptions
- GET /v1/subscriptions/{id} - Get subscription
- POST /v1/subscriptions - Create subscription
- PUT /v1/subscriptions/{id} - Update subscription
- DELETE /v1/subscriptions/{id} - Delete subscription
For direct API access, use: redisctl api cloud get /subscriptions
Cloud Access Control
Manage users, roles, and ACLs for Redis Cloud.
Users
List Users
redisctl cloud user list
Get User
redisctl cloud user get <user-id>
Create User
redisctl cloud user create --data '{
"name": "app-user",
"email": "user@example.com",
"role": "viewer"
}'
Update User
redisctl cloud user update <user-id> --data '{
"role": "member"
}'
Delete User
redisctl cloud user delete <user-id>
Roles
List Roles
redisctl cloud acl role list
Get Role
redisctl cloud acl role get <role-id>
Create Role
redisctl cloud acl role create --data '{
"name": "read-only",
"redisRules": [
{
"ruleName": "Read-Only",
"databases": [
{"subscriptionId": 123456, "databaseId": 789}
]
}
]
}'
Update Role
redisctl cloud acl role update <role-id> --data '{
"name": "read-write"
}'
Delete Role
redisctl cloud acl role delete <role-id>
Redis Rules
Redis ACL rules define permissions at the Redis command level.
List Redis Rules
redisctl cloud acl redis-rule list
Get Redis Rule
redisctl cloud acl redis-rule get <rule-id>
Create Redis Rule
redisctl cloud acl redis-rule create --data '{
"name": "Read-Only",
"acl": "+@read ~*"
}'
Common ACL Patterns
| Pattern | Description |
|---|---|
| +@all ~* | Full access to all keys |
| +@read ~* | Read-only access |
| +@write ~cache:* | Write only to cache:* keys |
| -@dangerous | Deny dangerous commands |
Examples
Set Up Read-Only User
# Create redis rule
redisctl cloud acl redis-rule create --data '{
"name": "readonly-rule",
"acl": "+@read -@dangerous ~*"
}'
# Create role with rule
redisctl cloud acl role create --data '{
"name": "readonly-role",
"redisRules": [{"ruleName": "readonly-rule", "databases": [...]}]
}'
Audit Access
# List all users and their roles
redisctl cloud user list -q "[].{name:name,role:role,email:email}" -o table
API Reference
These commands use the following REST endpoints:
- GET/POST /v1/acl/users - User management
- GET/POST /v1/acl/roles - Role management
- GET/POST /v1/acl/redisRules - Redis rule management
For direct API access: redisctl api cloud get /acl/users
Cloud Networking
Configure VPC peering, Private Service Connect (PSC), and Transit Gateway for Redis Cloud.
VPC Peering
Connect your Redis Cloud subscription to your VPC.
List VPC Peerings
redisctl cloud vpc-peering list --subscription <ID>
Get VPC Peering
redisctl cloud vpc-peering get --subscription <ID> --peering-id <PEERING_ID>
Create VPC Peering
redisctl cloud vpc-peering create --subscription <ID> --data '{
"region": "us-east-1",
"awsAccountId": "123456789012",
"vpcId": "vpc-abc123",
"vpcCidr": "10.0.0.0/16"
}' --wait
Delete VPC Peering
redisctl cloud vpc-peering delete --subscription <ID> --peering-id <PEERING_ID> --wait
AWS Setup
After creating the peering in redisctl:
- Get the peering request ID from the response
- In AWS Console, go to VPC → Peering Connections
- Accept the peering request
- Update route tables to route traffic to Redis Cloud CIDR
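The two AWS console steps above can also be done with the AWS CLI. The IDs below are placeholders, and the actual aws commands are shown as comments so the sketch runs standalone:

```shell
# Placeholder IDs -- substitute the values from your own account
PEERING_ID="pcx-0123456789abcdef0"
ROUTE_TABLE_ID="rtb-0123456789abcdef0"
REDIS_CIDR="10.0.0.0/24"   # Redis Cloud deployment CIDR from the subscription

# Accept the peering request:
#   aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id "$PEERING_ID"
# Route Redis Cloud traffic through the peering connection:
#   aws ec2 create-route --route-table-id "$ROUTE_TABLE_ID" \
#       --destination-cidr-block "$REDIS_CIDR" \
#       --vpc-peering-connection-id "$PEERING_ID"
echo "accept $PEERING_ID, then route $REDIS_CIDR via $ROUTE_TABLE_ID"
```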
Private Service Connect (GCP)
Create PSC Service
redisctl cloud psc create-service --subscription <ID> --data '{
"region": "us-central1"
}' --wait
Create PSC Endpoint
redisctl cloud psc create-endpoint --subscription <ID> --data '{
"serviceId": "psc-123",
"endpointName": "redis-endpoint"
}' --wait
List PSC Services
redisctl cloud psc list-services --subscription <ID>
Transit Gateway (AWS)
Create Transit Gateway Attachment
redisctl cloud tgw create --subscription <ID> --data '{
"region": "us-east-1",
"transitGatewayId": "tgw-abc123",
"cidrs": ["10.0.0.0/16"]
}' --wait
List Transit Gateway Attachments
redisctl cloud tgw list --subscription <ID>
Delete Transit Gateway Attachment
redisctl cloud tgw delete --subscription <ID> --tgw-id <TGW_ID> --wait
CIDR Allowlist
Control which IP ranges can access your subscription.
Get CIDR Allowlist
redisctl cloud subscription get-cidr --subscription <ID>
Update CIDR Allowlist
redisctl cloud subscription update-cidr --subscription <ID> --data '{
"cidrIps": ["10.0.0.0/16", "192.168.1.0/24"],
"securityGroupIds": ["sg-abc123"]
}'
Examples
Set Up AWS VPC Peering
# Create peering
PEERING=$(redisctl cloud vpc-peering create \
--subscription 123456 \
--data '{
"region": "us-east-1",
"awsAccountId": "123456789012",
"vpcId": "vpc-abc123",
"vpcCidr": "10.0.0.0/16"
}' --wait)
echo "Accept peering request in AWS Console"
echo "Peering ID: $(redisctl cloud vpc-peering list --subscription 123456 -q '[0].vpcPeeringId')"
List All Network Connections
# VPC peerings
redisctl cloud vpc-peering list --subscription 123456 -o table
# PSC services
redisctl cloud psc list-services --subscription 123456 -o table
# Transit gateways
redisctl cloud tgw list --subscription 123456 -o table
Active-Active Networking
For Active-Active subscriptions, use the Active-Active command variants:
redisctl cloud vpc-peering create-active-active \
--subscription <ID> \
--region us-east-1 \
--data '{...}' --wait
Troubleshooting
Peering Stuck in Pending
- Ensure you've accepted the peering request in your cloud console
- Verify the VPC CIDR doesn't overlap with Redis Cloud CIDR
- Check IAM permissions for peering operations
Cannot Connect After Peering
- Update route tables in your VPC
- Check security group rules allow Redis ports (default: 10000+)
- Verify DNS resolution if using private endpoints
API Reference
These commands use the following REST endpoints:
- GET/POST /v1/subscriptions/{id}/peerings - VPC peering
- GET/POST /v1/subscriptions/{id}/privateServiceConnect - PSC
- GET/POST /v1/subscriptions/{id}/transitGateway - Transit Gateway
For direct API access: redisctl api cloud get /subscriptions/123456/peerings
Cloud Tasks
Monitor async operations in Redis Cloud.
Overview
Most Redis Cloud operations (create, update, delete) are asynchronous. They return a task ID immediately while the work happens in the background. Use these commands to monitor task progress.
Commands
List Tasks
redisctl cloud task list
Get Task
redisctl cloud task get <task-id>
Example Output:
{
"taskId": "abc-123-def",
"commandType": "createDatabase",
"status": "processing-completed",
"description": "Create database",
"timestamp": "2024-01-15T10:30:00Z",
"response": {
"resourceId": 789
}
}
Task States
| Status | Description |
|---|---|
| received | Task received, queued for processing |
| processing-in-progress | Task is currently executing |
| processing-completed | Task completed successfully |
| processing-error | Task failed with error |
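In scripts it often helps to collapse these states into exit codes. A small sketch (task_exit_code is our own helper name, not part of redisctl):

```shell
# Map a task status to an exit code: 0 = success, 1 = failure, 2 = still running
task_exit_code() {
  case "$1" in
    processing-completed) return 0 ;;
    processing-error)     return 1 ;;
    *)                    return 2 ;;
  esac
}

task_exit_code "processing-completed" && echo "done"
task_exit_code "received"; echo "in flight -> exit $?"
```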
Examples
Check Task Status
# Get task details
redisctl cloud task get abc-123-def
# Get just the status
redisctl cloud task get abc-123-def -q 'status'
# Get error message if failed
redisctl cloud task get abc-123-def -q 'response.error.description'
Wait for Task Completion
# Using --wait flag (recommended)
redisctl cloud database create --subscription 123 --data '{...}' --wait
# Manual polling
TASK_ID=$(redisctl cloud database create --subscription 123 --data '{...}' -q 'taskId')
while true; do
STATUS=$(redisctl cloud task get $TASK_ID -q 'status')
echo "Status: $STATUS"
case $STATUS in
"processing-completed") echo "Success!"; break ;;
"processing-error") echo "Failed!"; exit 1 ;;
*) sleep 10 ;;
esac
done
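The manual polling loop above can be wrapped in a reusable helper that takes the target status, a timeout, and the command to poll. poll_until and fake_status are our own names; in real use the trailing arguments would be the redisctl cloud task get invocation:

```shell
# Poll a command until it prints the target value or the timeout elapses
poll_until() {
  target="$1"; timeout="$2"; interval="$3"; shift 3
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    [ "$("$@")" = "$target" ] && return 0
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  return 1
}

# Stub standing in for: redisctl cloud task get "$TASK_ID" -q 'status'
fake_status() { echo "processing-completed"; }
poll_until "processing-completed" 30 1 fake_status && echo "task finished"
```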
List Recent Tasks
# All recent tasks
redisctl cloud task list -o table
# Filter by type
redisctl cloud task list -q "[?commandType=='createDatabase']"
# Failed tasks only
redisctl cloud task list -q "[?status=='processing-error']"
Get Resource ID from Completed Task
# After task completes, get the created resource ID
redisctl cloud task get abc-123-def -q 'response.resourceId'
Common Task Types
| Command Type | Description |
|---|---|
| createSubscription | Create subscription |
| deleteSubscription | Delete subscription |
| createDatabase | Create database |
| updateDatabase | Update database |
| deleteDatabase | Delete database |
| createVpcPeering | Create VPC peering |
Troubleshooting
Task Not Found
Tasks are retained for a limited time. If you get a 404:
- The task may have expired
- Check the task ID is correct
Task Stuck in Processing
If a task stays in processing-in-progress for too long:
- Check Redis Cloud status page for outages
- Contact support if it exceeds expected time
Understanding Errors
# Get full error details
redisctl cloud task get abc-123-def -q 'response.error'
Common errors:
- INVALID_REQUEST - Bad input data
- RESOURCE_LIMIT_EXCEEDED - Quota exceeded
- RESOURCE_NOT_FOUND - Referenced resource doesn't exist
API Reference
These commands use the following REST endpoints:
- GET /v1/tasks - List all tasks
- GET /v1/tasks/{taskId} - Get specific task
For direct API access: redisctl api cloud get /tasks
Cloud Workflows
Multi-step operations that orchestrate multiple API calls with automatic polling and error handling.
Available Workflows
subscription-setup
Create a complete subscription with optional VPC peering and initial database.
redisctl cloud workflow subscription-setup \
--name production \
--cloud-provider aws \
--region us-east-1 \
--memory-limit-in-gb 10
What it does:
- Creates the subscription
- Waits for subscription to be active
- Optionally sets up VPC peering
- Optionally creates initial database
- Returns connection details
Options:
| Option | Description |
|---|---|
| --name | Subscription name |
| --cloud-provider | AWS, GCP, or Azure |
| --region | Cloud region |
| --memory-limit-in-gb | Total memory allocation |
| --with-vpc-peering | Set up VPC peering |
| --vpc-id | Your VPC ID (for peering) |
| --vpc-cidr | Your VPC CIDR (for peering) |
| --create-database | Create initial database |
| --database-name | Name for initial database |
Example with VPC Peering:
redisctl cloud workflow subscription-setup \
--name production \
--cloud-provider aws \
--region us-east-1 \
--memory-limit-in-gb 10 \
--with-vpc-peering \
--vpc-id vpc-abc123 \
--vpc-cidr 10.0.0.0/16 \
--create-database \
--database-name cache
When to Use Workflows
Use workflows when you need to:
- Perform multiple related operations in sequence
- Handle async operations with proper waiting
- Get a complete setup done in one command
Use individual commands when you need:
- Fine-grained control over each step
- Custom error handling
- Partial operations
Creating Custom Workflows
For operations not covered by built-in workflows, you can script them:
#!/bin/bash
set -e
# Create subscription
SUB_ID=$(redisctl cloud subscription create \
--data @subscription.json \
--wait \
-q 'id')
echo "Created subscription: $SUB_ID"
# Set up VPC peering
redisctl cloud vpc-peering create \
--subscription $SUB_ID \
--data @peering.json \
--wait
echo "VPC peering created - accept in AWS console"
# Create database
DB_ID=$(redisctl cloud database create \
--subscription $SUB_ID \
--data @database.json \
--wait \
-q 'databaseId')
echo "Created database: $DB_ID"
# Get connection info
redisctl cloud database get \
--subscription $SUB_ID \
--database-id $DB_ID \
-q '{endpoint: publicEndpoint, password: password}'
Error Handling
Workflows handle errors gracefully:
- Failed steps report clear error messages
- Partial progress is preserved (you can resume manually)
- Resources created before failure remain (clean up if needed)
# If workflow fails, check what was created
redisctl cloud subscription list -o table
redisctl cloud vpc-peering list --subscription <ID>
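One way to make manual cleanup easier is to record each resource as a script creates it and print the list on exit. A minimal sketch, with our own helper names and placeholder IDs:

```shell
# Track created resources so a failed run leaves a cleanup checklist
CREATED=""
note_created() { CREATED="$CREATED $1"; }
trap 'echo "created so far:$CREATED"' EXIT

note_created "subscription:123456"
note_created "database:123456:789"
# ... redisctl create calls would go between the note_created lines ...
```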
Comparison: Workflow vs Manual
With workflow:
redisctl cloud workflow subscription-setup \
--name prod --cloud-provider aws --region us-east-1 \
--memory-limit-in-gb 10 --create-database --database-name cache
Manual equivalent:
# 1. Create subscription and get ID
SUB_ID=$(redisctl cloud subscription create --data '{...}' --wait -q 'id')
# 2. Wait for active status
while [ "$(redisctl cloud subscription get $SUB_ID -q 'status')" != "active" ]; do
sleep 10
done
# 3. Create database
redisctl cloud database create --subscription $SUB_ID --data '{...}' --wait
# 4. Get connection info
redisctl cloud database list --subscription $SUB_ID
The workflow handles all the waiting, polling, and sequencing automatically.
Redis Enterprise Overview
Redis Enterprise is Redis's self-managed database platform for on-premises or cloud deployments. redisctl provides complete CLI access to the REST API.
Three-Tier Access
1. API Layer
Direct REST access for scripting and automation:
redisctl api enterprise get /v1/cluster
redisctl api enterprise post /v1/bdbs -d @database.json
2. Commands
Human-friendly commands for day-to-day operations:
redisctl enterprise cluster get
redisctl enterprise database create --name mydb --memory-size 1073741824
3. Workflows
Multi-step operations:
redisctl enterprise workflow init-cluster --name prod --nodes 3
Key Concepts
Cluster
The cluster is the top-level container that spans multiple nodes. It manages:
- Node membership
- Resource allocation
- Policies and certificates
- License
Nodes
Physical or virtual machines running Redis Enterprise. Each node provides:
- CPU and memory resources
- Network connectivity
- Storage for persistence
Databases (BDBs)
Databases run across the cluster. Each database has:
- Memory allocation
- Sharding configuration
- Replication settings
- Modules (RedisJSON, RediSearch, etc.)
Authentication
Redis Enterprise uses basic authentication:
# Environment variables
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # for self-signed certs
# Or profile
redisctl profile set enterprise \
--deployment-type enterprise \
--url "https://cluster.example.com:9443" \
--username "admin@cluster.local" \
--password "your-password" \
--insecure
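Under the hood this is plain HTTP Basic authentication over TLS: each request carries an Authorization header containing base64(user:password). A raw equivalent with curl is sketched below (the -k flag mirrors REDIS_ENTERPRISE_INSECURE for self-signed certs; hostname and credentials are placeholders):

```shell
# Equivalent raw request against a real cluster:
#   curl -k -u "admin@cluster.local:your-password" \
#        https://cluster.example.com:9443/v1/cluster
# The Authorization header value is just the base64 of "user:password":
printf 'admin@cluster.local:your-password' | base64
```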
Quick Examples
# Get cluster info
redisctl enterprise cluster get
# List databases
redisctl enterprise database list -o table
# Create database
redisctl enterprise database create \
--name cache \
--memory-size 1073741824
# Stream cluster stats
redisctl enterprise stats cluster --follow
# Generate support package
redisctl enterprise support-package cluster --upload
Command Groups
- Cluster - Cluster config, certs, policies
- Databases - Create, update, delete databases
- Nodes - Manage cluster nodes
- Access Control - Users, roles, LDAP
- Monitoring - Stats, logs, alerts
- Active-Active - CRDB operations
Operations
Special tools for cluster management:
- Support Package - Diagnostic packages
- License - License management
- Debug Info - Detailed diagnostics
- Diagnostics - Health checks
- Migrations - Data migrations
Next Steps
- API Layer - Direct REST access
- Workflows - Multi-step operations
- Enterprise Cookbook - Practical recipes
Enterprise API Layer
Direct REST access to the Redis Enterprise API for scripting and automation.
Overview
The API layer lets you call any Redis Enterprise REST endpoint directly. It's like a smart curl with:
- Automatic authentication
- Profile support
- Output formatting
- SSL handling
Usage
redisctl api enterprise <method> <endpoint> [options]
Methods: get, post, put, delete
Examples
GET Requests
# Cluster info
redisctl api enterprise get /v1/cluster
# List nodes
redisctl api enterprise get /v1/nodes
# List databases
redisctl api enterprise get /v1/bdbs
# Get specific database
redisctl api enterprise get /v1/bdbs/1
# Get users
redisctl api enterprise get /v1/users
POST Requests
# Create database
redisctl api enterprise post /v1/bdbs -d '{
"name": "mydb",
"memory_size": 1073741824
}'
# Create user
redisctl api enterprise post /v1/users -d @user.json
PUT Requests
# Update database
redisctl api enterprise put /v1/bdbs/1 -d '{
"memory_size": 2147483648
}'
# Update cluster
redisctl api enterprise put /v1/cluster -d '{"name": "new-name"}'
DELETE Requests
# Delete database
redisctl api enterprise delete /v1/bdbs/1
# Remove node
redisctl api enterprise delete /v1/nodes/3
Options
| Option | Description |
|---|---|
| -d, --data <JSON> | Request body (inline or @file) |
| -o, --output <FORMAT> | Output format (json, yaml, table) |
| -q, --query <JMESPATH> | Filter output |
Common Endpoints
Cluster
- GET /v1/cluster - Cluster info
- PUT /v1/cluster - Update cluster
- GET /v1/cluster/stats/last - Cluster stats
- GET /v1/cluster/certificates - Certificates
Nodes
- GET /v1/nodes - List nodes
- GET /v1/nodes/{id} - Get node
- PUT /v1/nodes/{id} - Update node
- DELETE /v1/nodes/{id} - Remove node
- GET /v1/nodes/{id}/stats/last - Node stats
Databases
- GET /v1/bdbs - List databases
- POST /v1/bdbs - Create database
- GET /v1/bdbs/{id} - Get database
- PUT /v1/bdbs/{id} - Update database
- DELETE /v1/bdbs/{id} - Delete database
- GET /v1/bdbs/{id}/stats/last - Database stats
Users & Roles
- GET /v1/users - List users
- POST /v1/users - Create user
- GET /v1/roles - List roles
- GET /v1/redis_acls - List ACLs
Active-Active
- GET /v1/crdbs - List CRDBs
- POST /v1/crdbs - Create CRDB
- GET /v1/crdb_tasks - List tasks
Logs & Alerts
- GET /v1/logs - Get logs
- GET /v1/cluster/alerts - Get alerts
Debug & Support
- GET /v1/debuginfo/all - Full debug info (binary)
- GET /v1/debuginfo/node/{id} - Node debug info
Scripting Examples
Export Cluster Config
# Save cluster configuration
redisctl api enterprise get /v1/cluster > cluster-config.json
redisctl api enterprise get /v1/bdbs > databases.json
redisctl api enterprise get /v1/users > users.json
Bulk Database Creation
# Create multiple databases
for name in cache sessions analytics; do
redisctl api enterprise post /v1/bdbs -d "{
\"name\": \"$name\",
\"memory_size\": 1073741824
}"
done
Health Check
#!/bin/bash
# Check cluster health via API
STATUS=$(redisctl api enterprise get /v1/cluster -q 'status')
if [ "$STATUS" != "active" ]; then
echo "Cluster unhealthy: $STATUS"
exit 1
fi
# Check nodes
redisctl api enterprise get /v1/nodes -q '[].{id: uid, status: status}' -o table
Watch Stats
# Poll stats every 5 seconds
while true; do
redisctl api enterprise get /v1/cluster/stats/last \
-q '{cpu:cpu_user,memory:free_memory}'
sleep 5
done
Binary Responses
Some endpoints return binary data (tar.gz):
# Download debug info
redisctl api enterprise get /v1/debuginfo/all --output debug.tar.gz
When to Use API Layer
Use API layer when:
- Endpoint isn't wrapped in human commands
- You need exact control over the request
- Building automation scripts
- Exploring the API
Use human commands when:
- There's a command for what you need
- You want ergonomic flags
- You prefer structured output
API Documentation
The cluster provides built-in API docs at:
https://your-cluster:9443/v1/swagger-ui/index.html
Cluster
Manage Redis Enterprise cluster configuration and operations.
Commands
Get Cluster Info
Get current cluster configuration and status.
redisctl enterprise cluster info [OPTIONS]
Options:
- -o, --output <FORMAT> - Output format: json, yaml, or table
- -q, --query <JMESPATH> - JMESPath query to filter output
Examples:
# Get full cluster information
redisctl enterprise cluster info
# Get specific fields in table format
redisctl enterprise cluster info -o table
# Get cluster name and version
redisctl enterprise cluster info -q "{name: name, version: version}"
# Check cluster health
redisctl enterprise cluster info -q "alert_settings"
Update Cluster
Update cluster configuration.
redisctl enterprise cluster update --data <JSON> [OPTIONS]
Options:
- --data <JSON> - Configuration updates (inline or @file.json)
Examples:
# Update cluster name
redisctl enterprise cluster update --data '{"name": "production-cluster"}'
# Update alert settings
redisctl enterprise cluster update --data '{
"alert_settings": {
"cluster_certs_about_to_expire": {"enabled": true, "threshold": 30}
}
}'
# Update from file
redisctl enterprise cluster update --data @cluster-config.json
Get Cluster Policy
Get cluster-wide policies.
redisctl enterprise cluster get-policy [OPTIONS]
Examples:
# Get all policies
redisctl enterprise cluster get-policy
# Get specific policy in YAML
redisctl enterprise cluster get-policy -o yaml -q "rack_aware"
Update Cluster Policy
Update cluster policies.
redisctl enterprise cluster update-policy --data <JSON> [OPTIONS]
Examples:
# Enable rack awareness
redisctl enterprise cluster update-policy --data '{"rack_aware": true}'
# Update multiple policies
redisctl enterprise cluster update-policy --data '{
"rack_aware": true,
"default_non_sharded_proxy_policy": "all-master-shards"
}'
Certificate Management
List Certificates
List cluster certificates.
redisctl enterprise cluster list-certificates [OPTIONS]
Examples:
# List all certificates
redisctl enterprise cluster list-certificates
# Check certificate expiration
redisctl enterprise cluster list-certificates -q "[].{name: name, expires: expiry_date}"
Update Certificate
Update cluster certificate.
redisctl enterprise cluster update-certificate --data <JSON> [OPTIONS]
Example Payload:
{
"name": "api-cert",
"key": "-----BEGIN RSA PRIVATE KEY-----\n...",
"certificate": "-----BEGIN CERTIFICATE-----\n..."
}
Examples:
# Update API certificate
redisctl enterprise cluster update-certificate --data @new-cert.json
# Update proxy certificate
redisctl enterprise cluster update-certificate --data '{
"name": "proxy-cert",
"key": "...",
"certificate": "..."
}'
Rotate Certificates
Rotate cluster certificates.
redisctl enterprise cluster rotate-certificates [OPTIONS]
Examples:
# Rotate all certificates
redisctl enterprise cluster rotate-certificates
# Rotate with custom validity period
redisctl enterprise cluster rotate-certificates --days 365
Cluster Operations
Check Cluster Status
Get detailed cluster status.
redisctl enterprise cluster status [OPTIONS]
Examples:
# Full status check
redisctl enterprise cluster status
# Check specific components
redisctl enterprise cluster status -q "services"
Get Cluster Stats
Get cluster statistics.
redisctl enterprise cluster stats [OPTIONS]
Options:
- --interval <SECONDS> - Stats interval (1sec, 1min, 5min, 15min, 1hour, 1day)
Examples:
# Get current stats
redisctl enterprise cluster stats
# Get hourly stats
redisctl enterprise cluster stats --interval 1hour
# Get memory usage
redisctl enterprise cluster stats -q "{used: used_memory, total: total_memory}"
License Management
Get License
redisctl enterprise cluster get-license
Update License
redisctl enterprise cluster update-license --data <JSON>
Example:
# Update license
redisctl enterprise cluster update-license --data '{
"license": "-----BEGIN LICENSE-----\n...\n-----END LICENSE-----"
}'
Module Management
List Modules
List available Redis modules.
redisctl enterprise module list
Upload Module
Upload a new module.
redisctl enterprise module upload --file <PATH>
Examples:
# Upload module
redisctl enterprise module upload --file redisgraph.zip
# Upload and get module ID
MODULE_ID=$(redisctl enterprise module upload --file module.zip -q "uid")
Common Patterns
Health Check Script
#!/bin/bash
# Check cluster health
STATUS=$(redisctl enterprise cluster info -q "status")
if [ "$STATUS" != "active" ]; then
echo "Cluster not healthy: $STATUS"
exit 1
fi
# Check certificate expiration
DAYS_LEFT=$(redisctl enterprise cluster list-certificates \
-q "[0].days_until_expiry")
if [ "$DAYS_LEFT" -lt 30 ]; then
echo "Certificate expiring soon: $DAYS_LEFT days"
fi
Monitor Cluster Resources
# Get resource utilization
redisctl enterprise cluster stats -q "{
cpu: cpu_usage_percent,
memory: memory_usage_percent,
disk: persistent_storage_usage_percent
}"
Backup Cluster Configuration
# Export cluster config
redisctl enterprise cluster info > cluster-backup-$(date +%Y%m%d).json
# Export policies
redisctl enterprise cluster get-policy > policies-backup-$(date +%Y%m%d).json
Troubleshooting
Common Issues
"Cluster not responding"
- Check network connectivity to cluster endpoint
- Verify credentials are correct
- Check if API is enabled on cluster
"Certificate expired"
- Rotate certificates: redisctl enterprise cluster rotate-certificates
- Or update manually with a new certificate
"License expired"
- Update license: redisctl enterprise cluster update-license --data @license.json
- Contact Redis support for a new license
"Policy update failed"
- Some policies require cluster restart
- Check policy compatibility with cluster version
Related Commands
- Nodes - Manage cluster nodes
- Databases - Manage databases in cluster
- Users - Manage cluster users
API Reference
These commands use the following REST endpoints:
- `GET /v1/cluster` - Get cluster info
- `PUT /v1/cluster` - Update cluster
- `GET /v1/cluster/policy` - Get policies
- `PUT /v1/cluster/policy` - Update policies
- `GET /v1/cluster/certificates` - List certificates
- `PUT /v1/cluster/update_cert` - Update certificate
- `POST /v1/cluster/certificates/rotate` - Rotate certificates
For direct API access: redisctl api enterprise get /v1/cluster
Enterprise Databases
Manage databases (BDBs) in Redis Enterprise clusters.
Commands
List Databases
redisctl enterprise database list [OPTIONS]
Examples:
# List all databases
redisctl enterprise database list
# Table format
redisctl enterprise database list -o table
# Get names and memory
redisctl enterprise database list -q "[].{name:name,memory:memory_size,status:status}"
Get Database
redisctl enterprise database get <ID> [OPTIONS]
Examples:
# Get database details
redisctl enterprise database get 1
# Get connection info
redisctl enterprise database get 1 -q "{endpoint:endpoints[0].addr,port:port}"
Create Database
redisctl enterprise database create [OPTIONS]
Options:
| Option | Description |
|---|---|
| `--name <NAME>` | Database name |
| `--memory-size <BYTES>` | Memory limit in bytes |
| `--port <PORT>` | Database port (optional, auto-assigned) |
| `--replication` | Enable replication |
| `--shards-count <N>` | Number of shards |
| `--data <JSON>` | Full configuration as JSON |
Examples:
# Create with flags
redisctl enterprise database create \
--name mydb \
--memory-size 1073741824 \
--port 12000 \
--replication
# Create with JSON
redisctl enterprise database create --data '{
"name": "mydb",
"memory_size": 1073741824,
"port": 12000,
"replication": true,
"shards_count": 2
}'
# Create with modules
redisctl enterprise database create --data '{
"name": "search-db",
"memory_size": 2147483648,
"module_list": ["search"]
}'
Update Database
redisctl enterprise database update <ID> --data <JSON> [OPTIONS]
Examples:
# Increase memory
redisctl enterprise database update 1 --data '{"memory_size": 2147483648}'
# Add modules
redisctl enterprise database update 1 --data '{
"module_list": ["json", "search"]
}'
# Change eviction policy
redisctl enterprise database update 1 --data '{"eviction_policy": "volatile-lru"}'
Delete Database
redisctl enterprise database delete <ID> [OPTIONS]
Examples:
# Delete database
redisctl enterprise database delete 1
# Force delete (skip confirmation)
redisctl enterprise database delete 1 --force
Database Operations
Backup Database
redisctl enterprise database backup <ID> [OPTIONS]
Examples:
# Trigger backup
redisctl enterprise database backup 1
Import Data
redisctl enterprise database import <ID> --data <JSON>
Example:
redisctl enterprise database import 1 --data '{
"source_type": "rdb_url",
"source_url": "http://backup-server/backup.rdb"
}'
Export Data
redisctl enterprise database export <ID> --data <JSON>
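The export payload mirrors the shape used in the migration script later on this page; the field names (`export_type`, `path`) are taken from that example, so verify them against your cluster's API version. Validating the JSON locally before sending catches quoting mistakes early:

```shell
# Build the payload once, validate it locally, then hand it to redisctl.
payload='{"export_type": "rdb", "path": "/tmp/export.rdb"}'
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"
# redisctl enterprise database export 1 --data "$payload"   # needs a live cluster
```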
Flush Database
redisctl enterprise database flush <ID>
Warning: This deletes all data in the database.
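Because flush is destructive, it is worth wrapping in an explicit confirmation step. A minimal sketch, with `redisctl` stubbed out so the pattern can be exercised without a cluster (remove the stub to run it for real):

```shell
# Stub for illustration only -- delete this line to use the real CLI.
redisctl() { echo "flushed database $4"; }

confirm_flush() {
  local db_id=$1 answer
  printf 'Flush ALL data in database %s? Retype its ID to confirm: ' "$db_id"
  read -r answer
  if [ "$answer" = "$db_id" ]; then
    redisctl enterprise database flush "$db_id"
  else
    echo "Aborted"
    return 1
  fi
}
```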
Shards
List Shards
redisctl enterprise database shards <ID>
Get Shard Stats
redisctl enterprise database shard-stats <ID>
Common Patterns
Get Connection String
ENDPOINT=$(redisctl enterprise database get 1 -q 'endpoints[0].addr[0]')
PORT=$(redisctl enterprise database get 1 -q 'port')
echo "redis://$ENDPOINT:$PORT"
Monitor Memory Usage
redisctl enterprise database list \
-q "[].{name:name,used:used_memory,limit:memory_size}" \
-o table
Bulk Update Databases
# Update all databases
for id in $(redisctl enterprise database list -q '[].uid' --raw); do
echo "Updating database $id"
redisctl enterprise database update $id --data '{"eviction_policy": "volatile-lru"}'
done
Create Database with Persistence
redisctl enterprise database create --data '{
"name": "persistent-db",
"memory_size": 1073741824,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec"
}'
Troubleshooting
"Not enough memory"
- Check cluster has available memory
- Reduce memory_size or add nodes
"Port already in use"
- Omit port to auto-assign
- Or specify a different port
"Module not found"
- Upload the module first: `redisctl enterprise module upload`
- Check the module name is correct
API Reference
REST endpoints:
- `GET /v1/bdbs` - List databases
- `POST /v1/bdbs` - Create database
- `GET /v1/bdbs/{id}` - Get database
- `PUT /v1/bdbs/{id}` - Update database
- `DELETE /v1/bdbs/{id}` - Delete database
For direct API access: redisctl api enterprise get /v1/bdbs
Enterprise Nodes
Manage nodes in Redis Enterprise clusters.
Commands
List Nodes
redisctl enterprise node list [OPTIONS]
Examples:
# List all nodes
redisctl enterprise node list
# Table format
redisctl enterprise node list -o table
# Get node IPs and status
redisctl enterprise node list -q "[].{id:uid,addr:addr,status:status}"
Get Node
redisctl enterprise node get <ID> [OPTIONS]
Examples:
# Get node details
redisctl enterprise node get 1
# Get specific fields
redisctl enterprise node get 1 -q "{addr:addr,cores:cores,memory:total_memory}"
Update Node
redisctl enterprise node update <ID> --data <JSON>
Examples:
# Update node settings
redisctl enterprise node update 1 --data '{"addr": "10.0.0.2"}'
# Set rack ID
redisctl enterprise node update 1 --data '{"rack_id": "rack-1"}'
Remove Node
redisctl enterprise node remove <ID> [OPTIONS]
Remove a node from the cluster.
Examples:
# Remove node
redisctl enterprise node remove 3
# Force remove (skip checks)
redisctl enterprise node remove 3 --force
Node Actions
Maintenance Mode
# Enter maintenance mode
redisctl enterprise node action <ID> maintenance-on
# Exit maintenance mode
redisctl enterprise node action <ID> maintenance-off
Check Node Status
redisctl enterprise node check <ID>
Node Stats
Get Node Statistics
redisctl enterprise stats node <ID> [OPTIONS]
Examples:
# Current stats
redisctl enterprise stats node 1
# Stream continuously
redisctl enterprise stats node 1 --follow
# Get CPU and memory
redisctl enterprise stats node 1 -q "{cpu:cpu_user,memory:free_memory}"
Joining Nodes
Join Node to Cluster
redisctl enterprise cluster join --data <JSON>
Example:
redisctl enterprise cluster join --data '{
"addr": "10.0.0.3",
"username": "admin@cluster.local",
"password": "password"
}'
Common Patterns
Check All Node Health
#!/bin/bash
for node in $(redisctl enterprise node list -q '[].uid' --raw); do
STATUS=$(redisctl enterprise node get $node -q 'status')
echo "Node $node: $STATUS"
done
Get Total Cluster Resources
redisctl enterprise node list -q '{
total_memory: sum([].total_memory),
total_cores: sum([].cores)
}'
Monitor Node Resources
# Watch node stats
watch -n 5 "redisctl enterprise stats node 1 -q '{cpu:cpu_user,mem:free_memory}'"
Drain Node Before Removal
# Put in maintenance mode
redisctl enterprise node action 3 maintenance-on
# Wait for shards to migrate
sleep 60
# Remove node
redisctl enterprise node remove 3
Troubleshooting
"Node not reachable"
- Check network connectivity
- Verify firewall rules allow cluster ports
- Check node services are running
"Cannot remove node"
- Ensure no master shards on node
- Put node in maintenance mode first
- Check cluster has enough resources
"Node out of memory"
- Add more nodes to cluster
- Reduce database memory allocations
- Check for memory leaks in applications
API Reference
REST endpoints:
- `GET /v1/nodes` - List nodes
- `GET /v1/nodes/{id}` - Get node
- `PUT /v1/nodes/{id}` - Update node
- `DELETE /v1/nodes/{id}` - Remove node
- `POST /v1/nodes/{id}/actions/{action}` - Node action
For direct API access: redisctl api enterprise get /v1/nodes
Enterprise Access Control
Manage users, roles, and LDAP integration for Redis Enterprise.
Users
List Users
redisctl enterprise user list [OPTIONS]
Examples:
# List all users
redisctl enterprise user list
# Table format
redisctl enterprise user list -o table
# Get usernames and roles
redisctl enterprise user list -q "[].{name:name,role:role,email:email}"
Get User
redisctl enterprise user get <ID> [OPTIONS]
Create User
redisctl enterprise user create --data <JSON>
Examples:
# Create admin user
redisctl enterprise user create --data '{
"name": "admin",
"email": "admin@example.com",
"password": "SecurePass123!",
"role": "admin"
}'
# Create viewer user
redisctl enterprise user create --data '{
"name": "viewer",
"email": "viewer@example.com",
"password": "ViewPass123!",
"role": "db_viewer"
}'
Update User
redisctl enterprise user update <ID> --data <JSON>
Examples:
# Change password
redisctl enterprise user update 2 --data '{"password": "NewPass123!"}'
# Update role
redisctl enterprise user update 2 --data '{"role": "db_member"}'
Delete User
redisctl enterprise user delete <ID>
Roles
List Roles
redisctl enterprise role list
Get Role
redisctl enterprise role get <ID>
Create Role
redisctl enterprise role create --data <JSON>
Example:
redisctl enterprise role create --data '{
"name": "custom-role",
"management": "db_member"
}'
Built-in Roles
| Role | Description |
|---|---|
| `admin` | Full cluster access |
| `cluster_member` | Cluster management, no user management |
| `cluster_viewer` | Read-only cluster access |
| `db_member` | Database management |
| `db_viewer` | Read-only database access |
Redis ACLs
Manage Redis ACL rules for database-level access control.
List ACLs
redisctl enterprise acl list
Get ACL
redisctl enterprise acl get <ID>
Create ACL
redisctl enterprise acl create --data <JSON>
Examples:
# Read-only ACL
redisctl enterprise acl create --data '{
"name": "readonly",
"acl": "+@read ~*"
}'
# Write to specific keys
redisctl enterprise acl create --data '{
"name": "app-writer",
"acl": "+@all ~app:*"
}'
Common ACL Patterns
| Pattern | Description |
|---|---|
| `+@all ~*` | Full access |
| `+@read ~*` | Read-only |
| `+@write ~prefix:*` | Write to `prefix:*` keys |
| `-@dangerous` | Deny dangerous commands |
| `+get +set ~*` | Only GET and SET |
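Several of these patterns can be created in one pass by looping over a small name-to-rule map. A sketch with `redisctl` stubbed so the loop itself runs standalone; drop the stub to run against a real cluster:

```shell
redisctl() { echo "create: $*"; }   # stub for illustration

# Each line is "name|acl-rule"; a pipe separator avoids clashing with
# the spaces and colons inside ACL rules.
while IFS='|' read -r name rule; do
  redisctl enterprise acl create --data "{\"name\": \"$name\", \"acl\": \"$rule\"}"
done <<'EOF'
readonly|+@read ~*
app-writer|+@all ~app:*
EOF
```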
LDAP Integration
Get LDAP Configuration
redisctl enterprise ldap get-config
Update LDAP Configuration
redisctl enterprise ldap update-config --data <JSON>
Example:
redisctl enterprise ldap update-config --data '{
"protocol": "ldaps",
"servers": [
{"host": "ldap.example.com", "port": 636}
],
"bind_dn": "cn=admin,dc=example,dc=com",
"bind_pass": "password",
"base_dn": "dc=example,dc=com",
"user_dn_query": "(uid=%u)"
}'
LDAP Mappings
Map LDAP groups to Redis Enterprise roles.
# List mappings
redisctl enterprise ldap list-mappings
# Create mapping
redisctl enterprise ldap create-mapping --data '{
"name": "admins-mapping",
"ldap_group_dn": "cn=admins,ou=groups,dc=example,dc=com",
"role": "admin"
}'
Examples
Set Up Service Account
# Create user for application
redisctl enterprise user create --data '{
"name": "myapp",
"email": "myapp@service.local",
"password": "ServicePass123!",
"role": "db_member"
}'
Audit User Access
# List all users with their roles
redisctl enterprise user list \
-q "[].{name:name,email:email,role:role,auth_method:auth_method}" \
-o table
Rotate All Passwords
for user in $(redisctl enterprise user list -q '[].uid' --raw); do
NEW_PASS=$(openssl rand -base64 16)
redisctl enterprise user update $user --data "{\"password\": \"$NEW_PASS\"}"
echo "User $user: $NEW_PASS"
done
Troubleshooting
"Authentication failed"
- Check username/password
- Verify the user exists: `redisctl enterprise user list`
- Check the user's role has the required permissions
"LDAP connection failed"
- Verify LDAP server is reachable
- Check bind credentials
- Verify SSL certificates for LDAPS
"ACL denied"
- Check ACL rules: `redisctl enterprise acl get <id>`
- Verify the user is associated with the correct ACL
API Reference
REST endpoints:
- `GET/POST /v1/users` - User management
- `GET/POST /v1/roles` - Role management
- `GET/POST /v1/redis_acls` - Redis ACL management
- `GET/PUT /v1/cluster/ldap` - LDAP configuration
- `GET/POST /v1/ldap_mappings` - LDAP mappings
For direct API access: redisctl api enterprise get /v1/users
Enterprise Monitoring
Statistics, logs, and alerts for Redis Enterprise clusters.
Statistics
Cluster Stats
redisctl enterprise stats cluster [OPTIONS]
Options:
| Option | Description |
|---|---|
| `--follow` / `-f` | Stream stats continuously |
| `--poll-interval <SECS>` | Polling interval in seconds (default: 5) |
Examples:
# Current cluster stats
redisctl enterprise stats cluster
# Stream continuously
redisctl enterprise stats cluster --follow
# Stream every 2 seconds
redisctl enterprise stats cluster --follow --poll-interval 2
# Get specific metrics
redisctl enterprise stats cluster -q "{cpu:cpu_user,memory:free_memory}"
Database Stats
redisctl enterprise stats database <ID> [OPTIONS]
Examples:
# Database stats
redisctl enterprise stats database 1
# Stream database stats
redisctl enterprise stats database 1 --follow
# Get ops/sec and memory
redisctl enterprise stats database 1 -q "{ops:total_req,memory:used_memory}"
Node Stats
redisctl enterprise stats node <ID> [OPTIONS]
Examples:
# Node stats
redisctl enterprise stats node 1
# Stream node stats
redisctl enterprise stats node 1 --follow
Logs
List Logs
redisctl enterprise logs list [OPTIONS]
Options:
| Option | Description |
|---|---|
| `--limit <N>` | Number of entries |
| `--offset <N>` | Skip entries |
| `--order <ASC/DESC>` | Sort order |
| `--since <TIME>` | Start time |
| `--until <TIME>` | End time |
Examples:
# Recent logs
redisctl enterprise logs list --limit 100
# Logs from specific time
redisctl enterprise logs list --since "2024-01-01T00:00:00Z"
# Filter by severity
redisctl enterprise logs list -q "[?severity=='ERROR']"
Alerts
List Alerts
redisctl enterprise alerts list [OPTIONS]
Examples:
# All active alerts
redisctl enterprise alerts list
# Filter by state
redisctl enterprise alerts list -q "[?state=='active']"
# Get alert summary
redisctl enterprise alerts list -q "[].{type:type,state:state,severity:severity}" -o table
Get Alert
redisctl enterprise alerts get <ID>
Clear Alert
redisctl enterprise alerts clear <ID>
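Clearing alerts one at a time gets tedious; the list query and clear command combine naturally. A sketch with `redisctl` stubbed so the loop can be tested without a cluster — the `uid` field for alerts is an assumption here, while the `-q`/`--raw` usage follows patterns shown elsewhere on this page:

```shell
redisctl() {   # stub: pretend two alerts are active
  case "$*" in
    *list*)  printf '1\n2\n' ;;
    *clear*) echo "cleared $4" ;;
  esac
}

# Clear every currently-active alert in one pass.
for id in $(redisctl enterprise alerts list -q "[?state=='active'].uid" --raw); do
  redisctl enterprise alerts clear "$id"
done
```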
Alert Settings
# Get alert settings
redisctl enterprise alerts get-settings
# Update alert settings
redisctl enterprise alerts update-settings --data '{
"cluster_certs_about_to_expire": {
"enabled": true,
"threshold": 30
}
}'
Common Alert Types
| Alert | Description |
|---|---|
| `node_failed` | Node unreachable |
| `node_memory` | Node memory threshold |
| `bdb_size` | Database size threshold |
| `cluster_certs_about_to_expire` | Certificate expiration |
| `license_about_to_expire` | License expiration |
Usage Reports
Generate Usage Report
redisctl enterprise usage-report generate [OPTIONS]
Options:
| Option | Description |
|---|---|
| `--start <DATE>` | Report start date |
| `--end <DATE>` | Report end date |
Examples:
# Generate monthly report
redisctl enterprise usage-report generate \
--start 2024-01-01 \
--end 2024-01-31
Common Patterns
Health Check Dashboard
#!/bin/bash
echo "=== Cluster Health ==="
# Cluster stats
echo "Cluster:"
redisctl enterprise stats cluster -q "{cpu: cpu_user, memory: free_memory}"
# Node status
echo "Nodes:"
redisctl enterprise node list -q "[].{id: uid, status: status}" -o table
# Active alerts
echo "Alerts:"
redisctl enterprise alerts list -q "[?state=='active'].{type: type, severity: severity}" -o table
Monitor Database Performance
# Watch database ops/sec
watch -n 5 "redisctl enterprise stats database 1 -q '{ops:total_req,latency:avg_latency}'"
Export Metrics for Grafana
# Export to JSON for external monitoring
while true; do
redisctl enterprise stats cluster > /var/metrics/cluster-$(date +%s).json
sleep 60
done
Alert on Thresholds
#!/bin/bash
MEMORY=$(redisctl enterprise stats cluster -q 'used_memory')
THRESHOLD=80000000000 # 80GB
if [ "$MEMORY" -gt "$THRESHOLD" ]; then
echo "ALERT: Memory usage high: $MEMORY bytes"
# Send notification
fi
Troubleshooting
Stats Not Updating
- Check cluster connectivity
- Verify stats collection is enabled
- Check node health
Missing Logs
- Adjust the time range with `--since`/`--until`
- Increase `--limit`
- Check log retention settings
Alert Not Clearing
- Resolve underlying issue first
- Use `redisctl enterprise alerts clear`
- Check the alert isn't recurring
API Reference
REST endpoints:
- `GET /v1/cluster/stats/last` - Cluster stats
- `GET /v1/bdbs/{id}/stats/last` - Database stats
- `GET /v1/nodes/{id}/stats/last` - Node stats
- `GET /v1/logs` - Logs
- `GET /v1/cluster/alerts` - Alerts
For direct API access: redisctl api enterprise get /v1/cluster/stats/last
Enterprise Active-Active
Manage Active-Active (CRDB) databases for geo-distributed deployments.
Overview
Active-Active databases replicate data across multiple clusters with conflict-free resolution. Each cluster can handle local reads and writes with automatic synchronization.
CRDB Commands
List CRDBs
redisctl enterprise crdb list [OPTIONS]
Examples:
# List all CRDBs
redisctl enterprise crdb list
# Table format
redisctl enterprise crdb list -o table
# Get names and status
redisctl enterprise crdb list -q "[].{name:name,guid:guid,status:status}"
Get CRDB
redisctl enterprise crdb get <GUID> [OPTIONS]
Examples:
# Get CRDB details
redisctl enterprise crdb get abc-123-def
# Get instances
redisctl enterprise crdb get abc-123-def -q 'instances'
Create CRDB
redisctl enterprise crdb create --data <JSON>
Example Payload:
{
"name": "global-cache",
"memory_size": 1073741824,
"port": 12000,
"instances": [
{
"cluster": {
"url": "https://cluster1.example.com:9443",
"credentials": {
"username": "admin@cluster.local",
"password": "password1"
}
}
},
{
"cluster": {
"url": "https://cluster2.example.com:9443",
"credentials": {
"username": "admin@cluster.local",
"password": "password2"
}
}
}
]
}
Examples:
# Create from file
redisctl enterprise crdb create --data @crdb.json
# Simple two-cluster setup
redisctl enterprise crdb create --data '{
"name": "geo-cache",
"memory_size": 1073741824,
"instances": [...]
}'
Update CRDB
redisctl enterprise crdb update <GUID> --data <JSON>
Examples:
# Increase memory
redisctl enterprise crdb update abc-123 --data '{"memory_size": 2147483648}'
# Add instance
redisctl enterprise crdb update abc-123 --data '{
"instances": [
{"cluster": {"url": "https://cluster3.example.com:9443", ...}}
]
}'
Delete CRDB
redisctl enterprise crdb delete <GUID>
CRDB Tasks
Active-Active operations are asynchronous and managed through tasks.
List CRDB Tasks
redisctl enterprise crdb-task list [OPTIONS]
Examples:
# All tasks
redisctl enterprise crdb-task list
# Tasks for specific CRDB
redisctl enterprise crdb-task list --crdb-guid abc-123
# Filter by status
redisctl enterprise crdb-task list -q "[?status=='completed']"
Get CRDB Task
redisctl enterprise crdb-task get <TASK-ID>
Example:
# Check task status
redisctl enterprise crdb-task get task-123 -q '{status:status,progress:progress}'
Cancel CRDB Task
redisctl enterprise crdb-task cancel <TASK-ID>
Instance Management
Get Instance Status
redisctl enterprise crdb get <GUID> -q 'instances[].{cluster:cluster.name,status:status}'
Add Instance to CRDB
redisctl enterprise crdb update <GUID> --data '{
"add_instances": [{
"cluster": {
"url": "https://new-cluster:9443",
"credentials": {...}
}
}]
}'
Remove Instance from CRDB
redisctl enterprise crdb update <GUID> --data '{
"remove_instances": ["instance-id"]
}'
Common Patterns
Create Two-Region Active-Active
#!/bin/bash
# Create CRDB across two regions
cat > crdb-config.json << 'EOF'
{
"name": "global-sessions",
"memory_size": 2147483648,
"port": 12000,
"causal_consistency": false,
"encryption": true,
"instances": [
{
"cluster": {
"url": "https://us-east.example.com:9443",
"credentials": {
"username": "admin@cluster.local",
"password": "$US_EAST_PASSWORD"
}
}
},
{
"cluster": {
"url": "https://eu-west.example.com:9443",
"credentials": {
"username": "admin@cluster.local",
"password": "$EU_WEST_PASSWORD"
}
}
}
]
}
EOF
envsubst < crdb-config.json | redisctl enterprise crdb create --data @-
Monitor CRDB Sync Status
# Check all instances are synced
redisctl enterprise crdb get abc-123 \
-q 'instances[].{cluster:cluster.name,lag:sync_lag}' \
-o table
Wait for CRDB Task
TASK_ID=$(redisctl enterprise crdb create --data @crdb.json -q 'task_id')
while true; do
STATUS=$(redisctl enterprise crdb-task get $TASK_ID -q 'status')
echo "Status: $STATUS"
case $STATUS in
"completed") echo "Success!"; break ;;
"failed") echo "Failed!"; exit 1 ;;
*) sleep 10 ;;
esac
done
Conflict Resolution
Active-Active uses CRDTs (Conflict-free Replicated Data Types) for automatic conflict resolution:
| Data Type | Resolution |
|---|---|
| Strings | Last-write-wins |
| Counters | Add/remove operations merge |
| Sets | Union of all operations |
| Sorted Sets | Union with max score |
Troubleshooting
"Instance not reachable"
- Check network connectivity between clusters
- Verify firewall allows CRDB ports
- Check cluster credentials
"Sync lag increasing"
- Check network latency between clusters
- Verify cluster resources (CPU, memory)
- Check for large write volumes
"Task stuck"
- Check all instances are healthy
- Cancel and retry: `redisctl enterprise crdb-task cancel`
- Check cluster logs for errors
"Conflict resolution issues"
- Review data types being used
- Consider causal consistency if needed
- Check application logic for proper CRDT usage
API Reference
REST endpoints:
- `GET /v1/crdbs` - List CRDBs
- `POST /v1/crdbs` - Create CRDB
- `GET /v1/crdbs/{guid}` - Get CRDB
- `PUT /v1/crdbs/{guid}` - Update CRDB
- `DELETE /v1/crdbs/{guid}` - Delete CRDB
- `GET /v1/crdb_tasks` - List tasks
- `GET /v1/crdb_tasks/{id}` - Get task
For direct API access: redisctl api enterprise get /v1/crdbs
Enterprise Workflows
Multi-step operations that orchestrate multiple API calls with automatic sequencing, waiting, and error handling.
Available Workflows
init-cluster
Initialize a new Redis Enterprise cluster from scratch.
redisctl enterprise workflow init-cluster \
--cluster-name production \
--username admin@cluster.local \
--password SecurePass123!
What it does:
- Bootstraps the cluster
- Sets up authentication
- Applies initial configuration
- Optionally creates default database
- Returns cluster details
Options:
| Option | Description |
|---|---|
| `--cluster-name` | Cluster name |
| `--username` | Admin username |
| `--password` | Admin password |
| `--license-file` | Path to license file |
| `--create-database` | Create initial database |
| `--database-name` | Name for initial database |
Full Example:
redisctl enterprise workflow init-cluster \
--cluster-name production \
--username admin@cluster.local \
--password SecurePass123! \
--license-file license.txt \
--create-database \
--database-name default-cache
When to Use Workflows
Use workflows when you need to:
- Set up a new cluster from scratch
- Perform multiple related operations
- Have automatic error handling
Use individual commands when you need:
- Fine-grained control
- Custom sequencing
- Partial operations
Custom Workflows
For operations not covered by built-in workflows, script them:
Add Node and Rebalance
#!/bin/bash
set -e
NEW_NODE="10.0.0.4"
# Join node to cluster
redisctl enterprise cluster join --data "{
\"addr\": \"$NEW_NODE\",
\"username\": \"admin@cluster.local\",
\"password\": \"$PASSWORD\"
}"
echo "Node joined, waiting for sync..."
sleep 30
# Verify node is active
STATUS=$(redisctl enterprise node list -q "[?addr[0]=='$NEW_NODE'].status | [0]")
if [ "$STATUS" != "active" ]; then
echo "Node not active: $STATUS"
exit 1
fi
echo "Node $NEW_NODE successfully added"
Database Migration
#!/bin/bash
set -e
SOURCE_DB=$1
TARGET_MEMORY=$2
# Create new database
NEW_DB=$(redisctl enterprise database create \
--name "migrated-$(date +%s)" \
--memory-size $TARGET_MEMORY \
-q 'uid')
echo "Created database: $NEW_DB"
# Export from source
redisctl enterprise database export $SOURCE_DB --data '{
"export_type": "rdb",
"path": "/tmp/export.rdb"
}'
# Import to target
redisctl enterprise database import $NEW_DB --data '{
"source_type": "rdb_file",
"source_path": "/tmp/export.rdb"
}'
echo "Migration complete: $SOURCE_DB -> $NEW_DB"
Rolling Restart
#!/bin/bash
set -e
# Get all nodes
for node in $(redisctl enterprise node list -q '[].uid' --raw); do
echo "Restarting node $node..."
# Put in maintenance
redisctl enterprise node action $node maintenance-on
sleep 30
# Restart services (via SSH or other method)
# ssh node$node "supervisorctl restart all"
# Exit maintenance
redisctl enterprise node action $node maintenance-off
sleep 30
# Verify healthy
STATUS=$(redisctl enterprise node get $node -q 'status')
if [ "$STATUS" != "active" ]; then
echo "Node $node not healthy after restart"
exit 1
fi
echo "Node $node restarted successfully"
done
echo "Rolling restart complete"
Error Handling
Workflows handle errors gracefully:
# If workflow fails midway
$ redisctl enterprise workflow init-cluster --cluster-name prod ...
Error: Failed at step 3: License invalid
# Check what was created
$ redisctl enterprise cluster get
# Cluster exists but not fully configured
# Resume manually or clean up
Comparison: Workflow vs Manual
With workflow:
redisctl enterprise workflow init-cluster \
--cluster-name prod \
--username admin@cluster.local \
--password Pass123!
Manual equivalent:
# 1. Bootstrap cluster
redisctl api enterprise post /v1/bootstrap/create_cluster -d '{...}'
# 2. Wait for bootstrap
while [ "$(redisctl api enterprise get /v1/bootstrap -q 'status')" != "completed" ]; do
sleep 5
done
# 3. Set credentials
redisctl api enterprise put /v1/cluster -d '{...}'
# 4. Apply license
redisctl enterprise license update --data @license.json
# 5. Create initial database
redisctl enterprise database create --name default ...
The workflow handles all sequencing, waiting, and error checking automatically.
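The manual polling loop above can be factored into a reusable helper. Sketched here with a stubbed status function so it runs standalone; in practice the stub body would be a `redisctl api enterprise get /v1/bootstrap -q 'status'` style call:

```shell
# Stub: counter persisted to a file so it survives command substitution;
# it reports "pending" twice, then "completed".
STATE_FILE=$(mktemp)
echo 0 > "$STATE_FILE"
get_status() {
  local n=$(( $(cat "$STATE_FILE") + 1 ))
  echo "$n" > "$STATE_FILE"
  [ "$n" -ge 3 ] && echo completed || echo pending
}

# Poll get_status until it returns the wanted value, up to max attempts.
wait_for() {
  local want=$1 max=${2:-60} interval=${3:-5} i=1
  while [ "$i" -le "$max" ]; do
    [ "$(get_status)" = "$want" ] && return 0
    sleep "$interval"   # use a real interval (e.g. 5) against a live cluster
    i=$((i + 1))
  done
  return 1
}
```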
Enterprise Operations
Special tools for Redis Enterprise cluster management and support.
Overview
These commands handle operational tasks beyond day-to-day database management:
- Support Package - Generate diagnostic packages for Redis Support
- License Management - View, update, and validate licenses
- Debug Info - Detailed cluster diagnostics
- Diagnostics - Run health checks
- Migrations - Data migrations between databases
Quick Reference
Support Package
# Generate and download
redisctl enterprise support-package cluster
# With optimization (smaller size)
redisctl enterprise support-package cluster --optimize
# Upload directly to Redis Support
redisctl enterprise support-package cluster --upload
License
# View current license
redisctl enterprise license get
# Update license
redisctl enterprise license update --file license.txt
Debug Info
# Full cluster debug info
redisctl enterprise debuginfo cluster
# Specific node
redisctl enterprise debuginfo node 1
Diagnostics
# Run all checks
redisctl enterprise diagnostics run
# List available checks
redisctl enterprise diagnostics list-checks
Migrations
# Create migration
redisctl enterprise migration create --source 1 --target 2
# Start migration
redisctl enterprise migration start <ID>
# Check status
redisctl enterprise migration get <ID>
Support Package Commands (Phase 2)
Enhanced support package generation with improved UX, async operations, and intelligent defaults.
Overview
The support-package command group provides a dedicated, user-friendly interface for generating Redis Enterprise support packages. This is the recommended way to collect diagnostic information for Redis Support tickets.
Why Use Support Package Commands?
While the `debuginfo` commands provide the core functionality, support-package commands offer:
- Better UX: Clear progress indicators and helpful output
- Smart defaults: Automatic timestamps and intelligent file naming
- Pre-flight checks: Disk space and permission verification
- Async support: Handle long-running operations gracefully
- Next steps: Clear guidance on uploading to support
Available Commands
Generate Cluster Support Package
# Quick generation with all defaults
redisctl enterprise support-package cluster
# Custom output location
redisctl enterprise support-package cluster -o /tmp/support.tar.gz
# Skip pre-flight checks (not recommended)
redisctl enterprise support-package cluster --skip-checks
# Use new API endpoints (Redis Enterprise 7.4+)
redisctl enterprise support-package cluster --use-new-api
# Optimize package size (reduces by ~20-30%)
redisctl enterprise support-package cluster --optimize
# Show optimization details
redisctl enterprise support-package cluster --optimize --optimize-verbose
# Upload directly to Redis Support (Files.com)
export REDIS_ENTERPRISE_FILES_API_KEY="your-api-key"
redisctl enterprise support-package cluster --upload
# Upload without saving locally
redisctl enterprise support-package cluster --upload --no-save
# Optimize and upload in one command
redisctl enterprise support-package cluster --optimize --upload --no-save
Example Output:
Redis Enterprise Support Package
================================
Cluster: prod-cluster-01
Version: 7.2.4
Nodes: 3
Databases: 5
Output: ./support-package-cluster-20240115T143000.tar.gz
Generating support package...
⠋ Collecting cluster data...
✓ Support package created successfully
File: support-package-cluster-20240115T143000.tar.gz
Size: 487.3 MB
Time: 154s
Next steps:
1. Upload to Redis Support: https://support.redis.com/upload
2. Reference your case number when uploading
3. Delete local file after upload to free space
Generate Database Support Package
# Support package for specific database
redisctl enterprise support-package database 1
# Custom output with database name
redisctl enterprise support-package database 1 \
-o production-db-issue.tar.gz
# For Active-Active database
redisctl enterprise support-package database 5 --use-new-api
Example Output:
Redis Enterprise Support Package
================================
Database: 1
Name: production-cache
Output: ./support-package-database-1-20240115T143000.tar.gz
Generating support package...
⠋ Collecting database 1 data...
✓ Database support package created successfully
File: support-package-database-1-20240115T143000.tar.gz
Size: 125.7 MB
Time: 45s
Next steps:
1. Upload to Redis Support: https://support.redis.com/upload
2. Reference your case number when uploading
3. Delete local file after upload to free space
Generate Node Support Package
# All nodes
redisctl enterprise support-package node
# Specific node
redisctl enterprise support-package node 2
# Custom output for node issue
redisctl enterprise support-package node 2 \
-o node2-memory-issue.tar.gz
Example Output:
Redis Enterprise Support Package
================================
Node: 2
Address: 10.0.1.2
Output: ./support-package-node-2-20240115T143000.tar.gz
Generating support package...
⠋ Collecting node 2 data...
✓ Node support package created successfully
File: support-package-node-2-20240115T143000.tar.gz
Size: 89.3 MB
Time: 32s
Next steps:
1. Upload to Redis Support: https://support.redis.com/upload
2. Reference your case number when uploading
3. Delete local file after upload to free space
Package Optimization
Support packages can be large (500MB-2GB+). The --optimize flag reduces package size by 20-30% through:
- Log truncation: Keeps most recent 1000 lines per log file (configurable)
- Redundant data removal: Removes duplicate or unnecessary files
- Nested archive cleanup: Removes nested .gz files
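The log-truncation step can be approximated with standard tools. This sketch assumes plain `.log` files and keeps only the most recent N lines of each; the real `--optimize` implementation may differ:

```shell
# Truncate every .log file under a directory to its last N lines
# (default 1000, matching the documented --optimize default).
truncate_logs() {
  local dir=$1 keep=${2:-1000}
  find "$dir" -name '*.log' -type f | while read -r f; do
    tail -n "$keep" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
  done
}
```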
Basic Optimization
# Optimize with defaults
redisctl enterprise support-package cluster --optimize
# Customize log retention
redisctl enterprise support-package cluster --optimize --log-lines 5000
# Show detailed optimization stats
redisctl enterprise support-package cluster --optimize --optimize-verbose
Optimization Output
Optimization: 487.3 MB → 358.2 MB (26.5% reduction)
Files processed: 847
Files truncated: 142
Files removed: 23
When to Use Optimization
Use optimization when:
- Package size exceeds upload limits
- Network bandwidth is limited
- Storage space is constrained
- Only recent log data is needed
Skip optimization when:
- Full historical logs are needed for issue diagnosis
- Investigating intermittent issues from the past
- Redis Support specifically requests unoptimized packages
Direct Upload to Redis Support
Upload support packages directly to Files.com for Redis Support tickets, eliminating manual upload steps.
Setup Files.com API Key
Get your Files.com API key from Redis Support, then configure it:
# Option 1: Environment variable (recommended for CI/CD)
export REDIS_ENTERPRISE_FILES_API_KEY="your-api-key"
# Option 2: Secure keyring storage (requires secure-storage feature)
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --use-keyring
# Option 3: Global config file (plaintext)
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --global
# Option 4: Per-profile config
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --profile enterprise-prod
Upload Commands
# Generate and upload
redisctl enterprise support-package cluster --upload
# Upload without local copy (saves disk space)
redisctl enterprise support-package cluster --upload --no-save
# Optimize before upload (recommended)
redisctl enterprise support-package cluster --optimize --upload --no-save
# Database-specific package
redisctl enterprise support-package database 1 --optimize --upload
Upload Output
Generating support package...
Uploading to Files.com: /RLEC_Customers/Uploads/support-package-cluster-20240115T143000.tar.gz
Size: 358234567 bytes
✓ Support package created successfully
Uploaded to: RLEC_Customers/Uploads/support-package-cluster-20240115T143000.tar.gz
Size: 341.7 MB
Time: 124s
API Key Priority
The Files.com API key is resolved in this order:
1. REDIS_ENTERPRISE_FILES_API_KEY environment variable
2. Profile-specific files_api_key in config
3. Global files_api_key in config
4. System keyring (if secure-storage feature enabled)
5. REDIS_FILES_API_KEY environment variable (fallback)
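The lookup order above can be expressed as a simple first-match chain. A hedged Python sketch of the documented priority (redisctl's actual resolution code may differ):

```python
import os

def resolve_files_api_key(profile_cfg, global_cfg, keyring_lookup):
    """Return the first key found, following the documented order:
    env var, profile config, global config, keyring, fallback env var.
    Illustrative sketch only."""
    return (
        os.environ.get("REDIS_ENTERPRISE_FILES_API_KEY")
        or profile_cfg.get("files_api_key")
        or global_cfg.get("files_api_key")
        or keyring_lookup()
        or os.environ.get("REDIS_FILES_API_KEY")
    )

# Profile config wins over global config when no env var is set
key = resolve_files_api_key({"files_api_key": "p"}, {"files_api_key": "g"}, lambda: None)
```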
Secure API Key Storage
With the secure-storage feature, API keys are stored in your OS keyring:
- macOS: Keychain
- Windows: Credential Manager
- Linux: Secret Service (GNOME Keyring, KWallet)
# Install with secure storage
cargo install redisctl --features secure-storage
# Store key securely
redisctl files-key set "$REDIS_ENTERPRISE_FILES_API_KEY" --use-keyring
# Verify storage
redisctl files-key get
# Output: Key found in keyring: your-ke...key4
# Remove when no longer needed
redisctl files-key remove --keyring
The config file only stores a reference:
files_api_key = "keyring:files-api-key"
Pre-flight Checks
The command automatically performs safety checks before generating packages:
Disk Space Check
Warning: Low disk space detected (< 1GB available)
Continue anyway? (y/N):
File Overwrite Protection
Warning: File support-package.tar.gz already exists
Overwrite? (y/N):
Permission Verification
Error: Cannot write to directory /restricted/path
Please choose a different location or check permissions
To skip all checks (not recommended for production):
redisctl enterprise support-package cluster --skip-checks
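If you want to run equivalent checks yourself in a CI script before invoking the command, the three pre-flight checks boil down to a few stdlib calls. A rough Python sketch (not redisctl's implementation):

```python
import os
import shutil

def preflight(out_path, min_free_bytes=1 << 30):
    """Approximate the documented pre-flight checks: disk space,
    file-overwrite, and write permission. Illustrative sketch only."""
    warnings = []
    out_dir = os.path.dirname(os.path.abspath(out_path)) or "."
    if shutil.disk_usage(out_dir).free < min_free_bytes:
        warnings.append("low disk space (< 1GB available)")
    if os.path.exists(out_path):
        warnings.append(f"{out_path} already exists")
    if not os.access(out_dir, os.W_OK):
        warnings.append(f"cannot write to {out_dir}")
    return warnings
```

An empty return value means all checks passed; otherwise each warning mirrors one of the prompts shown above.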
Async Operations
For large clusters, support package generation can take several minutes:
With Wait (Default)
# Wait for completion with default timeout (10 minutes)
redisctl enterprise support-package cluster --wait
# Custom timeout (30 minutes for very large clusters)
redisctl enterprise support-package cluster --wait --wait-timeout 1800
Without Wait
# Start generation and return immediately
redisctl enterprise support-package cluster --no-wait
# Output:
# Task ID: abc123-def456-789
# Check status: redisctl enterprise support-package status abc123-def456-789
Check Status
redisctl enterprise support-package status abc123-def456-789
# Output:
# Support Package Generation Status
# =================================
# Task ID: abc123-def456-789
# Status: in_progress
# Progress: 65%
# Message: Collecting node 3 data...
List Available Packages
redisctl enterprise support-package list
Note: Most Redis Enterprise versions don't store generated packages on the server. This command is a placeholder for future functionality.
Smart File Naming
The command uses intelligent defaults for file names:
| Type | Pattern | Example |
|---|---|---|
| Cluster | support-package-cluster-{timestamp}.tar.gz | support-package-cluster-20240115T143000.tar.gz |
| Database | support-package-database-{uid}-{timestamp}.tar.gz | support-package-database-1-20240115T143000.tar.gz |
| Node | support-package-node-{uid}-{timestamp}.tar.gz | support-package-node-2-20240115T143000.tar.gz |
| All Nodes | support-package-nodes-{timestamp}.tar.gz | support-package-nodes-20240115T143000.tar.gz |
Timestamps use ISO format for easy sorting: YYYYMMDDTHHMMSS
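The naming patterns in the table are easy to reproduce in scripts, for example when pre-computing the expected output file name. A small Python sketch following the documented pattern (redisctl generates these names itself):

```python
from datetime import datetime

def package_name(pkg_type, uid=None, now=None):
    """Build a file name matching the documented pattern
    support-package-{type}[-{uid}]-{timestamp}.tar.gz (illustrative)."""
    ts = (now or datetime.now()).strftime("%Y%m%dT%H%M%S")
    mid = f"{pkg_type}-{uid}" if uid is not None else pkg_type
    return f"support-package-{mid}-{ts}.tar.gz"

print(package_name("node", uid=2, now=datetime(2024, 1, 15, 14, 30)))
# support-package-node-2-20240115T143000.tar.gz
```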
Best Practices
1. Organized Collection
#!/bin/bash
# Create case-specific directory
CASE_ID="CASE-12345"
mkdir -p "./support-$CASE_ID"
# Collect all relevant packages
redisctl enterprise support-package cluster \
-o "./support-$CASE_ID/cluster.tar.gz"
redisctl enterprise support-package database 1 \
-o "./support-$CASE_ID/database-1.tar.gz"
# Create summary
echo "Case: $CASE_ID" > "./support-$CASE_ID/README.txt"
echo "Issue: Database 1 high latency" >> "./support-$CASE_ID/README.txt"
echo "Collected: $(date)" >> "./support-$CASE_ID/README.txt"
2. Automated Daily Collection
#!/bin/bash
# Daily support package collection for monitoring
OUTPUT_DIR="/backup/support-packages"
RETENTION_DAYS=7
# Generate with date-based naming
redisctl enterprise support-package cluster \
-o "$OUTPUT_DIR/daily-$(date +%Y%m%d).tar.gz"
# Clean up old packages
find "$OUTPUT_DIR" -name "daily-*.tar.gz" \
-mtime +$RETENTION_DAYS -delete
3. Pre-incident Collection
# Collect baseline before maintenance
redisctl enterprise support-package cluster \
-o "baseline-pre-upgrade-$(date +%Y%m%d).tar.gz"
# Perform upgrade...
# Collect post-change package
redisctl enterprise support-package cluster \
-o "post-upgrade-$(date +%Y%m%d).tar.gz"
Integration with Support Workflow
1. Generate Package
redisctl enterprise support-package cluster
2. Verify Package
# Check file size and type
ls -lh support-package-*.tar.gz
file support-package-*.tar.gz
# Quick content verification
tar -tzf support-package-*.tar.gz | head -20
3. Upload to Support
- Navigate to https://support.redis.com/upload
- Select your case number
- Upload the tar.gz file directly
- Add description of the issue
4. Clean Up
# Remove local copy after successful upload
rm support-package-*.tar.gz
Troubleshooting
Package Generation Fails
# Check cluster connectivity
redisctl enterprise cluster get
# Verify credentials
redisctl profile list
# Try with explicit credentials
export REDIS_ENTERPRISE_URL="https://your-cluster:9443"
export REDIS_ENTERPRISE_USER="your-user"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true"
Timeout Issues
# Increase timeout for large clusters
redisctl enterprise support-package cluster \
--wait --wait-timeout 3600 # 1 hour
Permission Denied
# Use a writable directory
redisctl enterprise support-package cluster \
-o /tmp/support.tar.gz
# Or fix permissions
chmod 755 ./output-directory
Comparison with debug-info
| Feature | debug-info | support-package |
|---|---|---|
| Binary download | ✅ | ✅ |
| Progress indicators | ✅ | ✅ Enhanced |
| Pre-flight checks | ❌ | ✅ |
| Smart naming | Basic | Advanced |
| Async operations | ❌ | ✅ |
| Status checking | ❌ | ✅ |
| Clear next steps | ❌ | ✅ |
| Cluster info display | ❌ | ✅ |
CI/CD Integration with JSON Output
The support-package commands fully support structured JSON output for automation and CI/CD pipelines.
Basic JSON Output
# Generate package with JSON output
redisctl enterprise support-package cluster -o json
# Output:
{
"success": true,
"package_type": "cluster",
"file_path": "support-package-cluster-20240115T143000.tar.gz",
"file_size": 510234567,
"file_size_display": "487.3 MB",
"elapsed_seconds": 154,
"cluster_name": "prod-cluster-01",
"cluster_version": "7.2.4-92",
"message": "Support package created successfully",
"timestamp": "2024-01-15T14:32:34Z"
}
CI/CD Script Examples
Automated Collection on Failure
#!/bin/bash
# collect-support-on-failure.sh
# Run tests
if ! ./run-tests.sh; then
echo "Tests failed, collecting support package..."
# Generate support package with JSON output
result=$(redisctl enterprise support-package cluster -o json)
# Check if successful
if [ "$(echo "$result" | jq -r '.success')" = "true" ]; then
file_path=$(echo "$result" | jq -r '.file_path')
file_size=$(echo "$result" | jq -r '.file_size_display')
echo "Support package created: $file_path ($file_size)"
# Upload to artifact storage
aws s3 cp "$file_path" "s3://support-packages/$(date +%Y%m%d)/"
# Create support ticket
curl -X POST https://support.redis.com/api/tickets \
-H "Authorization: Bearer $SUPPORT_TOKEN" \
-d @- <<EOF
{
"title": "CI Test Failure - $(date)",
"priority": "high",
"attachment": "$file_path",
"metadata": $(echo "$result" | jq -c .)
}
EOF
# Clean up local file
rm "$file_path"
else
echo "Failed to create support package"
echo "$result" | jq -r '.error'
exit 1
fi
fi
GitHub Actions Integration
name: Support Package Collection
on:
workflow_dispatch:
schedule:
- cron: '0 0 * * 0' # Weekly on Sunday
jobs:
collect-support:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install redisctl
run: |
curl -L https://github.com/redis-developer/redisctl/releases/latest/download/redisctl-linux-amd64.tar.gz | tar xz
sudo mv redisctl /usr/local/bin/
- name: Configure Redis Enterprise credentials
run: |
redisctl profile set enterprise \
--deployment enterprise \
--url ${{ secrets.REDIS_ENTERPRISE_URL }} \
--username ${{ secrets.REDIS_ENTERPRISE_USER }} \
--password ${{ secrets.REDIS_ENTERPRISE_PASSWORD }} \
--insecure
- name: Collect support package
id: support
run: |
# Generate package with JSON output
OUTPUT=$(redisctl enterprise support-package cluster -o json)
echo "$OUTPUT" > support-result.json
# Extract key fields
SUCCESS=$(echo "$OUTPUT" | jq -r '.success')
FILE_PATH=$(echo "$OUTPUT" | jq -r '.file_path')
FILE_SIZE=$(echo "$OUTPUT" | jq -r '.file_size_display')
# Set outputs for next steps
echo "success=$SUCCESS" >> $GITHUB_OUTPUT
echo "file_path=$FILE_PATH" >> $GITHUB_OUTPUT
echo "file_size=$FILE_SIZE" >> $GITHUB_OUTPUT
- name: Upload artifact
if: steps.support.outputs.success == 'true'
uses: actions/upload-artifact@v4
with:
name: support-package-${{ github.run_id }}
path: ${{ steps.support.outputs.file_path }}
retention-days: 30
- name: Create issue on large package
if: steps.support.outputs.success == 'true'
run: |
FILE_SIZE_BYTES=$(jq -r '.file_size' support-result.json)
# If package is over 1GB, create an issue
if [ "$FILE_SIZE_BYTES" -gt 1073741824 ]; then
gh issue create \
--title "Large support package detected" \
--body "Support package size: ${{ steps.support.outputs.file_size }}" \
--label monitoring
fi
Jenkins Pipeline
pipeline {
agent any
stages {
stage('Health Check') {
steps {
script {
def clusterHealth = sh(
script: 'redisctl enterprise cluster get -o json',
returnStdout: true
).trim()
def health = readJSON text: clusterHealth
if (health.data.state != 'active') {
echo "Cluster unhealthy, generating support package..."
def supportResult = sh(
script: 'redisctl enterprise support-package cluster -o json',
returnStdout: true
).trim()
def support = readJSON text: supportResult
if (support.success) {
archiveArtifacts artifacts: support.file_path
// Send notification
emailext (
subject: "Redis Cluster Issue - Support Package Generated",
body: """
Cluster State: ${health.data.state}
Support Package: ${support.file_path}
Size: ${support.file_size_display}
Generated at: ${support.timestamp}
""",
to: 'ops-team@company.com'
)
}
}
}
}
}
}
}
Terraform Integration
# Generate support package before infrastructure changes
resource "null_resource" "pre_change_support" {
provisioner "local-exec" {
command = <<-EOT
# Generate support package and capture output
OUTPUT=$(redisctl enterprise support-package cluster -o json)
# Save to state bucket
if [ "$(echo "$OUTPUT" | jq -r '.success')" = "true" ]; then
FILE=$(echo "$OUTPUT" | jq -r '.file_path')
aws s3 cp "$FILE" "s3://terraform-state/support-packages/pre-${timestamp()}/"
fi
EOT
}
triggers = {
always_run = timestamp()
}
}
Parsing JSON Output in Different Languages
Python
import json
import subprocess
# Generate support package
result = subprocess.run(
['redisctl', 'enterprise', 'support-package', 'cluster', '-o', 'json'],
capture_output=True,
text=True
)
# Parse JSON output
data = json.loads(result.stdout)
if data['success']:
print(f"Package created: {data['file_path']}")
print(f"Size: {data['file_size_display']}")
print(f"Time taken: {data['elapsed_seconds']} seconds")
# Upload to monitoring system
metrics.send('support_package.size', data['file_size'])
metrics.send('support_package.generation_time', data['elapsed_seconds'])
else:
print(f"Error: {data.get('error', 'Unknown error')}")
Node.js
const { exec } = require('child_process');
const fs = require('fs');
// Generate support package
exec('redisctl enterprise support-package cluster -o json', (error, stdout, stderr) => {
if (error) {
console.error(`Error: ${error.message}`);
return;
}
const result = JSON.parse(stdout);
if (result.success) {
console.log(`Package created: ${result.file_path}`);
console.log(`Size: ${result.file_size_display}`);
// Upload to cloud storage
uploadToS3(result.file_path).then(() => {
// Clean up local file
fs.unlinkSync(result.file_path);
});
}
});
Monitoring and Alerting
#!/bin/bash
# monitor-support-package.sh
# Generate package and check size
result=$(redisctl enterprise support-package cluster -o json)
if [ "$(echo "$result" | jq -r '.success')" = "true" ]; then
size_bytes=$(echo "$result" | jq -r '.file_size')
elapsed=$(echo "$result" | jq -r '.elapsed_seconds')
# Send metrics to monitoring system
curl -X POST http://metrics.internal/api/v1/metrics \
-H "Content-Type: application/json" \
-d @- <<EOF
{
"metrics": [
{
"name": "redis.support_package.size_bytes",
"value": $size_bytes,
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
},
{
"name": "redis.support_package.generation_seconds",
"value": $elapsed,
"timestamp": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
]
}
EOF
# Alert if package is too large
if [ "$size_bytes" -gt 2147483648 ]; then # 2GB
curl -X POST http://alerts.internal/api/v1/alert \
-H "Content-Type: application/json" \
-d "{\"severity\": \"warning\", \"message\": \"Large support package: $(echo "$result" | jq -r '.file_size_display')\"}"
fi
fi
Related Commands
- Debug Info Commands - Lower-level diagnostic collection
- Logs Commands - View logs without full package
- Cluster Commands - Check cluster health
- Database Commands - Database management
License Management Commands
Manage Redis Enterprise licenses with comprehensive tools for compliance monitoring, multi-instance management, and automated workflows.
Overview
The license commands provide powerful capabilities for managing Redis Enterprise licenses:
- View and update license information
- Monitor expiration across multiple instances
- Generate compliance reports
- Bulk license updates across deployments
- Automated monitoring and alerting
Core License Commands
Get License Information
# Get full license details
redisctl enterprise license get
# Get specific fields with JMESPath
redisctl enterprise license get -q 'expiration_date'
redisctl enterprise license get -q '{name: cluster_name, expires: expiration_date}'
Update License
# Update with JSON data
redisctl enterprise license update --data '{
"license": "YOUR_LICENSE_KEY_HERE"
}'
# Update from file
redisctl enterprise license update --data @new-license.json
# Update from stdin
echo '{"license": "..."}' | redisctl enterprise license update --data -
Upload License File
# Upload a license file directly
redisctl enterprise license upload --file /path/to/license.txt
# Supports both raw license text and JSON format
redisctl enterprise license upload --file license.json
Validate License
# Validate license before applying
redisctl enterprise license validate --data @license.json
# Validate from stdin
cat license.txt | redisctl enterprise license validate --data -
Check License Expiration
# Get expiration information
redisctl enterprise license expiry
# Check if expiring soon
redisctl enterprise license expiry -q 'warning'
# Get days remaining
redisctl enterprise license expiry -q 'days_remaining'
View Licensed Features
# List all licensed features
redisctl enterprise license features
# Check specific features
redisctl enterprise license features -q 'flash_enabled'
redisctl enterprise license features -q 'modules'
License Usage Report
# Get current usage vs limits
redisctl enterprise license usage
# Get RAM usage
redisctl enterprise license usage -q 'ram'
# Check shard availability
redisctl enterprise license usage -q 'shards.available'
Multi-Instance License Workflows
License Audit Across All Profiles
# Audit all configured Redis Enterprise instances
redisctl enterprise workflow license audit
# Show only expiring licenses (within 30 days)
redisctl enterprise workflow license audit --expiring
# Show only expired licenses
redisctl enterprise workflow license audit --expired
# Export as JSON for processing
redisctl enterprise workflow license audit -o json > license-audit.json
Bulk License Updates
# Update license across all enterprise profiles
redisctl enterprise workflow license bulk-update \
--profiles all \
--data @new-license.json
# Update specific profiles
redisctl enterprise workflow license bulk-update \
--profiles "prod-east,prod-west,staging" \
--data @new-license.json
# Dry run to see what would be updated
redisctl enterprise workflow license bulk-update \
--profiles all \
--data @new-license.json \
--dry-run
License Compliance Report
# Generate comprehensive compliance report
redisctl enterprise workflow license report
# Export as CSV for spreadsheets
redisctl enterprise workflow license report --format csv > compliance-report.csv
# Generate JSON report for automation
redisctl enterprise workflow license report -o json
License Monitoring
# Monitor all profiles for expiring licenses
redisctl enterprise workflow license monitor
# Custom warning threshold (default 30 days)
redisctl enterprise workflow license monitor --warning-days 60
# Exit with error code if any licenses are expiring (for CI/CD)
redisctl enterprise workflow license monitor --fail-on-warning
Automation Examples
CI/CD License Check
#!/bin/bash
# Check license status in CI/CD pipeline
if ! redisctl enterprise workflow license monitor --warning-days 14 --fail-on-warning; then
echo "ERROR: License issues detected!"
exit 1
fi
License Expiration Script
#!/bin/bash
# Email alert for expiring licenses
COUNT=$(redisctl enterprise workflow license audit --expiring -q 'length(@)')
if [ "$COUNT" -gt 0 ]; then
echo "Warning: $COUNT licenses expiring soon!" | \
mail -s "Redis Enterprise License Alert" admin@company.com
redisctl enterprise workflow license audit --expiring \
-q "[].{profile: profile, expires: expiration_date, days: days_remaining}" -o table
fi
Monthly Compliance Report
#!/bin/bash
# Generate monthly compliance report
REPORT_DATE=$(date +%Y-%m)
REPORT_FILE="license-compliance-${REPORT_DATE}.csv"
# Generate CSV report
redisctl enterprise workflow license report --format csv > "$REPORT_FILE"
# Email the report
echo "Please find attached the monthly license compliance report." | \
mail -s "Redis License Report - $REPORT_DATE" \
-a "$REPORT_FILE" \
compliance@company.com
Automated License Renewal
#!/bin/bash
# Automatically apply new license when available
LICENSE_FILE="/secure/path/new-license.json"
if [ -f "$LICENSE_FILE" ]; then
# Validate the license first
if redisctl enterprise license validate --data @"$LICENSE_FILE"; then
# Apply to all production instances
redisctl enterprise workflow license bulk-update \
--profiles "prod-east,prod-west" \
--data @"$LICENSE_FILE"
# Archive the applied license
mv "$LICENSE_FILE" "/secure/path/applied/$(date +%Y%m%d)-license.json"
else
echo "ERROR: Invalid license file!"
exit 1
fi
fi
Profile Management for Multi-Instance
Setup Multiple Profiles
# Add production profiles
redisctl profile set prod-east \
--deployment-type enterprise \
--url https://redis-east.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_EAST
redisctl profile set prod-west \
--deployment-type enterprise \
--url https://redis-west.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_WEST
# Add staging profile
redisctl profile set staging \
--deployment-type enterprise \
--url https://redis-staging.company.com:9443 \
--username admin@redis.local \
--password $REDIS_PASS_STAGING
Check License Per Profile
# Check specific profile
redisctl -p prod-east enterprise license expiry
redisctl -p prod-west enterprise license usage
redisctl -p staging enterprise license features
Common Use Cases
Pre-Renewal Planning
# Get usage across all instances for capacity planning
for profile in $(redisctl profile list -q '[].name'); do
echo "=== Profile: $profile ==="
redisctl -p "$profile" enterprise license usage -o yaml
done
License Synchronization
# Ensure all instances have the same license
MASTER_LICENSE=$(redisctl -p prod-east enterprise license get -o json)
echo "$MASTER_LICENSE" | \
redisctl enterprise workflow license bulk-update \
--profiles "prod-west,staging,dev" \
--data -
Compliance Dashboard Data
# Generate JSON data for dashboard
{
echo '{"timestamp": "'$(date -Iseconds)'",'
echo '"instances": '
redisctl enterprise workflow license audit -o json
echo '}'
} > dashboard-data.json
Output Formats
All commands support multiple output formats:
# JSON output (default)
redisctl enterprise license get -o json
# YAML output
redisctl enterprise license get -o yaml
# Table output
redisctl enterprise license get -o table
JMESPath Filtering
Use JMESPath queries to extract specific information:
# Get expiration dates for all profiles
redisctl enterprise workflow license audit -q '[].{profile: profile, expires: expiration_date}'
# Filter only expiring licenses
redisctl enterprise workflow license audit -q "[?expiring_soon==`true`]"
# Get usage percentages
redisctl enterprise license usage -q '{
ram_used_pct: (ram.used_gb / ram.limit_gb * `100`),
shards_used_pct: (shards.used / shards.limit * `100`)
}'
Troubleshooting
Common Issues
1. License validation fails
# Check license format
redisctl enterprise license validate --data @license.json
2. Bulk update fails for some profiles
# Use dry-run to identify issues
redisctl enterprise workflow license bulk-update --profiles all --data @license.json --dry-run
3. Monitoring shows unexpected results
# Verify profile configurations
redisctl profile list
# Test connection to each profile
for p in $(redisctl profile list -q '[].name'); do
  echo "Testing $p..."
  redisctl -p "$p" enterprise cluster get -q 'name' || echo "Failed: $p"
done
Notes
- License files can be in JSON format or raw license text
- Workflow commands operate on all configured enterprise profiles
- Use --dry-run for bulk operations to preview changes
- Monitor commands can integrate with CI/CD pipelines using exit codes
- CSV export format is ideal for spreadsheet analysis and reporting
- All sensitive license data should be handled securely
Debug Info Commands
Collect diagnostic information and support packages for troubleshooting Redis Enterprise clusters.
Overview
Debug info commands gather comprehensive diagnostic data from Redis Enterprise clusters, nodes, and databases. Since the Phase 1 improvements, these commands download binary tar.gz support packages that can be uploaded directly to Redis Support.
Available Commands
Collect Cluster Support Package
# Download cluster-wide support package (recommended)
redisctl enterprise debug-info all
# With custom output file
redisctl enterprise debug-info all --file /tmp/cluster-support.tar.gz
# Use new API endpoint (for Redis Enterprise 7.4+)
redisctl enterprise debug-info all --use-new-api
Output: Downloads a tar.gz file containing:
- Complete cluster configuration
- All node information and logs
- Database configurations
- System metrics and diagnostics
- Network configuration
- Performance data
Default filename: support-package-cluster-{timestamp}.tar.gz
Collect Node Support Package
# Download support package for all nodes
redisctl enterprise debug-info node
# Download for specific node
redisctl enterprise debug-info node 1
# With custom output
redisctl enterprise debug-info node 1 --file /tmp/node1-support.tar.gz
Output: Downloads a tar.gz file containing:
- Node configuration and state
- System resources and metrics
- Local log files
- Process information
- Network configuration
Default filenames:
- All nodes: support-package-nodes-{timestamp}.tar.gz
- Specific node: support-package-node-{uid}-{timestamp}.tar.gz
Collect Database Support Package
# Download support package for specific database
redisctl enterprise debug-info database 1
# With custom output
redisctl enterprise debug-info database 1 --file /tmp/db1-support.tar.gz
# Use new API endpoint
redisctl enterprise debug-info database 1 --use-new-api
Output: Downloads a tar.gz file containing:
- Database configuration
- Shard distribution and state
- Replication information
- Performance metrics
- Recent operations and logs
Default filename: support-package-db-{uid}-{timestamp}.tar.gz
Binary Download Support (Phase 1)
Starting with v0.5.1, all debug-info commands properly handle binary responses:
# Downloads actual tar.gz file (not JSON)
redisctl enterprise debug-info all
# Verify the downloaded file
file support-package-cluster-*.tar.gz
# Output: gzip compressed data, from Unix
# Extract and view contents
tar -tzf support-package-cluster-*.tar.gz | head
API Endpoint Compatibility
The tool supports both old (deprecated) and new API endpoints:
| Command | Old Endpoint (default) | New Endpoint (--use-new-api) |
|---|---|---|
all | /v1/debuginfo/all | /v1/cluster/debuginfo |
node | /v1/debuginfo/node | /v1/nodes/{uid}/debuginfo |
database | /v1/debuginfo/all/bdb/{uid} | /v1/bdbs/{uid}/debuginfo |
Note: Old endpoints are deprecated as of Redis Enterprise 7.4. Use --use-new-api for newer clusters.
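For scripts that call these endpoints directly (for example via the raw API access), the table above maps to a simple lookup. A Python sketch of the documented mapping (illustrative only):

```python
def debuginfo_endpoint(target, uid=None, use_new_api=False):
    """Return the REST path for a debug-info target, per the
    endpoint table above (illustrative sketch of documented paths)."""
    if use_new_api:
        return {
            "all": "/v1/cluster/debuginfo",
            "node": f"/v1/nodes/{uid}/debuginfo",
            "database": f"/v1/bdbs/{uid}/debuginfo",
        }[target]
    return {
        "all": "/v1/debuginfo/all",
        "node": "/v1/debuginfo/node",
        "database": f"/v1/debuginfo/all/bdb/{uid}",
    }[target]
```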
Common Use Cases
Quick Support Package for Troubleshooting
# Generate support package with automatic naming
redisctl enterprise debug-info all
# Output shows:
# ✓ Support package created successfully
# File: support-package-cluster-20250916-110539.tar.gz
# Size: 305.7 KB
Preparing for Support Ticket
# 1. Generate cluster support package
redisctl enterprise debug-info all --file support-case-12345.tar.gz
# 2. Verify the file
ls -lh support-case-12345.tar.gz
file support-case-12345.tar.gz
# 3. Upload to Redis Support portal
# Reference your case number: 12345
Database-Specific Issues
# Generate package for problematic database
redisctl enterprise debug-info database 1
# The package includes database-specific logs and metrics
# Upload directly to support ticket
Automated Collection Script
#!/bin/bash
# Collect support packages for all components
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
OUTPUT_DIR="./support-$TIMESTAMP"
mkdir -p "$OUTPUT_DIR"
echo "Collecting cluster support package..."
redisctl enterprise debug-info all \
--file "$OUTPUT_DIR/cluster.tar.gz"
echo "Collecting node support packages..."
for node_id in 1 2 3; do
redisctl enterprise debug-info node $node_id \
--file "$OUTPUT_DIR/node-$node_id.tar.gz"
done
echo "Support packages saved to $OUTPUT_DIR"
Important Notes
Security Considerations
- Support packages contain sensitive information (hostnames, IPs, configurations)
- Review contents before sharing if needed
- Delete local copies after uploading to support
- Use secure channels for transmission
Performance Impact
- Package generation may temporarily impact cluster performance
- Large clusters can generate packages over 1GB
- Run during maintenance windows when possible
- Network bandwidth considerations for remote clusters
File Management
- Files are saved in current directory by default
- Use --file to specify a custom location
- Automatic timestamps prevent overwriting
- Clean up old support packages regularly
Progress Indicators
The tool now shows progress during package generation:
⠋ Generating support package...
✓ Support package created successfully
File: support-package-cluster-20250916-110539.tar.gz
Size: 305.7 KB
Troubleshooting
Authentication Errors
If you get authentication errors, ensure correct credentials:
# Check your profile
redisctl profile list
# Use environment variables for testing
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@redis.local"
export REDIS_ENTERPRISE_PASSWORD="your_password"
export REDIS_ENTERPRISE_INSECURE="true"
Large File Sizes
For very large support packages:
# The package is already gzip-compressed; write it to a filesystem with more free space
redisctl enterprise debug-info all --file /mnt/storage/support.tar.gz
# Split large files for upload
split -b 100M support-package.tar.gz support-part-
Verify Package Contents
# List contents without extracting
tar -tzf support-package-cluster-*.tar.gz
# Extract specific files
tar -xzf support-package-cluster-*.tar.gz logs/
# View package info
gzip -l support-package-cluster-*.tar.gz
Related Commands
- Support Package Commands - Enhanced support package workflow (Phase 2)
- Logs Commands - View cluster logs directly
- Stats Commands - Monitor performance metrics
- Cluster Commands - Check cluster health
Diagnostics
The diagnostics commands provide tools for monitoring and troubleshooting Redis Enterprise cluster health, running diagnostic checks, and generating diagnostic reports.
Overview
Redis Enterprise includes a built-in diagnostics system that performs various health checks on the cluster, nodes, and databases. These checks help identify potential issues before they become critical problems.
Available Commands
Get Diagnostics Configuration
Retrieve the current diagnostics configuration:
# Get full diagnostics config
redisctl enterprise diagnostics get
# Get specific configuration fields
redisctl enterprise diagnostics get -q "enabled_checks"
Update Diagnostics Configuration
Modify diagnostics settings:
# Update from JSON file
redisctl enterprise diagnostics update --data @diagnostics-config.json
# Update from stdin
echo '{"check_interval": 300}' | redisctl enterprise diagnostics update --data -
# Disable specific checks
redisctl enterprise diagnostics update --data '{"disabled_checks": ["memory_check", "disk_check"]}'
Run Diagnostics Checks
Trigger diagnostic checks manually:
# Run all diagnostics
redisctl enterprise diagnostics run
# Run with specific parameters
redisctl enterprise diagnostics run --data '{"checks": ["connectivity", "resources"]}'
List Available Checks
View all available diagnostic checks:
# List all checks
redisctl enterprise diagnostics list-checks
# Output as table
redisctl enterprise diagnostics list-checks -o table
Get Latest Report
Retrieve the most recent diagnostics report:
# Get latest report
redisctl enterprise diagnostics last-report
# Get specific sections
redisctl enterprise diagnostics last-report -q "cluster_health"
Get Specific Report
Retrieve a diagnostics report by ID:
# Get report by ID
redisctl enterprise diagnostics get-report <report_id>
# Get report summary only
redisctl enterprise diagnostics get-report <report_id> -q "summary"
List All Reports
View all available diagnostics reports:
# List all reports
redisctl enterprise diagnostics list-reports
# List recent reports only
redisctl enterprise diagnostics list-reports --data '{"limit": 10}'
# Filter by date range
redisctl enterprise diagnostics list-reports --data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}'
Diagnostic Check Types
Common diagnostic checks include:
-
Resource Checks
- Memory utilization
- CPU usage
- Disk space
- Network bandwidth
-
Cluster Health
- Node connectivity
- Replication status
- Shard distribution
- Quorum status
-
Database Health
- Endpoint availability
- Persistence status
- Backup status
- Module functionality
-
Security Checks
- Certificate expiration
- Authentication status
- Encryption settings
- ACL configuration
Configuration Examples
Enable Automatic Diagnostics
{
"enabled": true,
"auto_run": true,
"check_interval": 3600,
"retention_days": 30,
"email_alerts": true,
"alert_recipients": ["ops@example.com"]
}
Configure Check Thresholds
{
"thresholds": {
"memory_usage_percent": 80,
"disk_usage_percent": 85,
"cpu_usage_percent": 75,
"certificate_expiry_days": 30
}
}
Disable Specific Checks
{
"disabled_checks": [
"backup_validation",
"module_check"
],
"check_timeout": 30
}
Practical Examples
Daily Health Check Script
#!/bin/bash
# Run daily diagnostics and email report
# Run diagnostics
redisctl enterprise diagnostics run
# Get latest report
REPORT=$(redisctl enterprise diagnostics last-report)
# Check for critical issues
CRITICAL=$(redisctl enterprise diagnostics last-report -q "issues[?severity=='critical'] | length(@)")
if [ "${CRITICAL:-0}" -gt 0 ]; then
  # Send alert for critical issues
  echo "$REPORT" | mail -s "Redis Enterprise: Critical Issues Found" ops@example.com
fi
Monitor Cluster Health
# Continuous health monitoring
watch -n 60 'redisctl enterprise diagnostics last-report -q "summary" -o table'
Generate Monthly Report
# Get all reports for the month
redisctl enterprise diagnostics list-reports \
--data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}' \
-o json > monthly-diagnostics.json
# Extract key metrics using JMESPath
redisctl enterprise diagnostics list-reports \
--data '{"start_date": "2024-01-01", "end_date": "2024-01-31"}' \
-q "[].{date: timestamp, health_score: summary.health_score}"
Pre-Maintenance Check
# Run comprehensive diagnostics before maintenance
redisctl enterprise diagnostics run --data '{
"comprehensive": true,
"include_logs": true,
"validate_backups": true
}'
# Wait for completion and check results
sleep 30
redisctl enterprise diagnostics last-report -q "ready_for_maintenance"
Report Structure
Diagnostics reports typically include:
{
"report_id": "diag-12345",
"timestamp": "2024-01-15T10:30:00Z",
"cluster_id": "cluster-1",
"summary": {
"health_score": 95,
"total_checks": 50,
"passed": 48,
"warnings": 1,
"failures": 1
},
"cluster_health": {
"nodes": [...],
"databases": [...],
"replication": {...}
},
"resource_usage": {
"memory": {...},
"cpu": {...},
"disk": {...}
},
"issues": [
{
"severity": "warning",
"component": "node-2",
"message": "Disk usage at 82%",
"recommendation": "Consider adding storage"
}
],
"recommendations": [...]
}
Best Practices
- Schedule Regular Checks - Run diagnostics daily or weekly
- Monitor Trends - Track health scores over time
- Set Up Alerts - Configure email alerts for critical issues
- Archive Reports - Keep historical reports for trend analysis
- Pre-Maintenance Checks - Always run diagnostics before maintenance
- Custom Thresholds - Adjust thresholds based on your environment
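The "Monitor Trends" practice can be as simple as appending each day's health score to a CSV. A minimal sketch, assuming the report exposes `summary.health_score` as in the report structure shown above (the CSV path is illustrative):

```shell
#!/bin/bash
# record_health_score: append today's date and the cluster health score
# to a CSV so trends can be graphed later (summary.health_score matches
# the report structure shown earlier in this section)
record_health_score() {
  local csv="${1:-health-trend.csv}"
  local score
  score=$(redisctl enterprise diagnostics last-report -q "summary.health_score")
  echo "$(date +%Y-%m-%d),${score}" >> "$csv"
}
```

Run it from cron after the daily diagnostics run to build up history for trend analysis.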
Integration with Monitoring
The diagnostics system can be integrated with external monitoring tools:
# Export to Prometheus format
redisctl enterprise diagnostics last-report -q "metrics" | \
prometheus-push-gateway
# Send to logging system
redisctl enterprise diagnostics last-report | \
logger -t redis-diagnostics
# Create JIRA ticket for issues
ISSUE_COUNT=$(redisctl enterprise diagnostics last-report -q "length(issues)")
if [ "${ISSUE_COUNT:-0}" -gt 0 ]; then
  ISSUES=$(redisctl enterprise diagnostics last-report -q "issues")
  create-jira-ticket --project OPS --summary "Redis Diagnostics Issues" --description "$ISSUES"
fi
Troubleshooting
Diagnostics Not Running
# Check if diagnostics are enabled
redisctl enterprise diagnostics get -q "enabled"
# Enable diagnostics
redisctl enterprise diagnostics update --data '{"enabled": true}'
Reports Not Generated
# Check last run time
redisctl enterprise diagnostics get -q "last_run"
# Trigger manual run
redisctl enterprise diagnostics run
Missing Checks
# List disabled checks
redisctl enterprise diagnostics get -q "disabled_checks"
# Re-enable all checks
redisctl enterprise diagnostics update --data '{"disabled_checks": []}'
Related Commands
- enterprise cluster - Cluster management and health
- enterprise stats - Performance statistics
- enterprise logs - System logs and events
- enterprise action - Monitor diagnostic task progress
Database Migration
The migration commands provide tools for database import/export operations and migration status tracking in Redis Enterprise.
Available Commands
Get Migration Status
Check the status of a specific migration operation:
# Get migration status
redisctl enterprise migration get 12345
# Get migration status as YAML
redisctl enterprise migration get 12345 -o yaml
# Extract specific fields
redisctl enterprise migration get 12345 -q '{status: status, progress: progress_percentage}'
# Check if migration is complete
redisctl enterprise migration get 12345 -q 'status == "completed"'
Export Database
Export database data for backup or migration:
# Export database
redisctl enterprise migration export 1
# Export and save task ID
TASK_ID=$(redisctl enterprise migration export 1 -q 'task_id')
# Monitor export progress
redisctl enterprise action get $TASK_ID
# Export with specific options (via database commands)
redisctl enterprise database export 1 --data '{
"export_type": "rdb",
"compression": "gzip"
}'
Import Database
Import data into a database:
# Import from RDB file URL
cat <<EOF | redisctl enterprise migration import 1 --data -
{
"source_type": "url",
"source_url": "https://storage.example.com/backup.rdb.gz",
"import_type": "rdb"
}
EOF
# Import from another database
redisctl enterprise migration import 2 --data '{
"source_type": "database",
"source_database_uid": 1
}'
# Import from file
redisctl enterprise migration import 3 --data @import-config.json
Output Examples
Migration Status
{
"uid": 12345,
"status": "in_progress",
"type": "import",
"database_uid": 1,
"started": "2024-03-15T10:00:00Z",
"progress_percentage": 65,
"estimated_completion": "2024-03-15T10:30:00Z",
"bytes_transferred": 1073741824,
"total_bytes": 1649267441
}
Export Response
{
"task_id": "task-export-67890",
"status": "queued",
"database_uid": 1,
"export_location": "s3://backups/db1-20240315.rdb.gz"
}
Import Response
{
"task_id": "task-import-11111",
"status": "started",
"database_uid": 2,
"source": "https://storage.example.com/backup.rdb.gz"
}
Common Use Cases
Database Backup
Create and manage database backups:
# Export database for backup
redisctl enterprise migration export 1
# Check export status
redisctl enterprise action list -q "[?contains(name, 'export')]"
# Download exported file (if accessible)
EXPORT_URL=$(redisctl enterprise action get <task_id> -q 'result.export_url')
curl -o backup.rdb.gz "$EXPORT_URL"
Database Cloning
Clone a database within the cluster:
# Export source database
EXPORT_TASK=$(redisctl enterprise migration export 1 -q 'task_id')
# Wait for export to complete
redisctl enterprise action wait $EXPORT_TASK
# Get export location
EXPORT_LOC=$(redisctl enterprise action get $EXPORT_TASK -q 'result.location')
# Import to new database
cat <<EOF | redisctl enterprise migration import 2 --data -
{
"source_type": "internal",
"source_location": "$EXPORT_LOC"
}
EOF
Cross-Cluster Migration
Migrate databases between clusters:
# On source cluster: Export database
redisctl enterprise migration export 1
# Note the export location
# Transfer file to destination cluster storage
# (Use appropriate method: S3, FTP, SCP, etc.)
# On destination cluster: Import database
cat <<EOF | redisctl enterprise migration import 1 --data -
{
"source_type": "url",
"source_url": "https://storage.example.com/export.rdb.gz",
"skip_verify_ssl": false
}
EOF
Scheduled Backups
Automate regular database exports:
#!/bin/bash
# backup.sh - Daily backup script
DBS=$(redisctl enterprise database list -q '[].uid' --raw)
for DB in $DBS; do
echo "Backing up database $DB"
TASK=$(redisctl enterprise migration export $DB -q 'task_id')
# Store task IDs for monitoring
echo "$TASK:$DB:$(date +%Y%m%d)" >> backup-tasks.log
done
# Monitor all backup tasks
while read line; do
TASK=$(echo $line | cut -d: -f1)
DB=$(echo $line | cut -d: -f2)
STATUS=$(redisctl enterprise action get $TASK -q 'status')
echo "Database $DB backup: $STATUS"
done < backup-tasks.log
Migration Monitoring
Track migration progress and handle issues:
# List all migration-related tasks
redisctl enterprise action list -q "[?contains(name, 'migration') || contains(name, 'import') || contains(name, 'export')]"
# Monitor specific migration
MIGRATION_ID=12345
while true; do
  STATUS=$(redisctl enterprise migration get $MIGRATION_ID -q 'status')
  PROGRESS=$(redisctl enterprise migration get $MIGRATION_ID -q 'progress_percentage')
  echo "Status: $STATUS, Progress: $PROGRESS%"
  [ "$STATUS" = "completed" ] && break
  [ "$STATUS" = "failed" ] && { echo "Migration failed" >&2; break; }
  sleep 10
done
# Check for errors
redisctl enterprise migration get $MIGRATION_ID -q 'error'
Error Handling
Handle migration failures:
# Check migration error details
redisctl enterprise migration get <uid> -q '{status: status, error: error_message, failed_at: failed_timestamp}'
# List failed migrations
redisctl enterprise action list -q "[?status == 'failed' && contains(name, 'migration')]"
# Retry failed import
FAILED_CONFIG=$(redisctl enterprise migration get <uid> -q 'configuration')
echo "$FAILED_CONFIG" | redisctl enterprise migration import <bdb_uid> --data -
Best Practices
- Pre-Migration Checks: Verify source and target compatibility
- Test Migrations: Always test with non-production data first
- Monitor Progress: Track migration status throughout the process
- Verify Data: Confirm data integrity after migration
- Schedule Wisely: Run large migrations during maintenance windows
- Keep Backups: Maintain backups before starting migrations
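The "Verify Data" step can be scripted by comparing key counts between the source and target endpoints. A rough sketch using redis-cli (host and port arguments are placeholders; matching DBSIZE is a sanity check, not proof of full data integrity):

```shell
#!/bin/bash
# verify_key_count: compare DBSIZE between source and target databases
# after a migration; returns non-zero if the counts differ
# usage: verify_key_count SRC_HOST SRC_PORT DST_HOST DST_PORT
verify_key_count() {
  local src dst
  src=$(redis-cli -h "$1" -p "$2" DBSIZE)
  dst=$(redis-cli -h "$3" -p "$4" DBSIZE)
  if [ "$src" = "$dst" ]; then
    echo "key counts match ($src)"
  else
    echo "mismatch: source=$src target=$dst" >&2
    return 1
  fi
}
```

For stronger verification, spot-check a sample of keys or compare application-level checksums in addition to counts.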
Troubleshooting
Import Failures
When imports fail:
# Check database status
redisctl enterprise database get <bdb_uid> -q 'status'
# Verify available memory
redisctl enterprise database get <bdb_uid> -q '{memory_size: memory_size, used_memory: used_memory}'
# Check cluster resources
redisctl enterprise cluster get -q 'resources'
# Review error logs
redisctl enterprise logs get --filter "database=$BDB_UID"
Export Issues
When exports fail:
# Check disk space on nodes
redisctl enterprise node list -q '[].{node: uid, disk_free: disk_free_size}'
# Verify database is accessible
redisctl enterprise database get <bdb_uid> -q 'status'
# Check export permissions
redisctl enterprise database get <bdb_uid> -q 'backup_configuration'
Related Commands
- redisctl enterprise database - Database management including import/export
- redisctl enterprise action - Track migration tasks
- redisctl enterprise cluster - Check cluster resources
- redisctl enterprise logs - View migration-related logs
Overview & Concepts
What is redisctl?
redisctl is a command-line tool for managing Redis Cloud and Redis Enterprise deployments. It provides type-safe API clients, async operation handling, and a library-first architecture.
The Problem
Before redisctl, managing Redis deployments meant:
- Manual UI clicking - No way to script operations
- Fragile bash scripts - curl with hardcoded endpoints and manual JSON parsing
- Polling loops - Writing custom logic to wait for async operations
- Credential exposure - Passwords on command lines or in plaintext
- Reinventing the wheel - Every team writing the same scripts
The Three-Tier Model
redisctl provides three levels of interaction:
1. API Layer
Direct REST access for scripting and automation. Think of it as a smart curl replacement.
# Any endpoint, any method
redisctl api cloud get /subscriptions
redisctl api enterprise post /v1/bdbs -d @database.json
Use when:
- Building automation scripts
- Accessing endpoints not yet wrapped in commands
- You need exact control over requests
2. Human Commands
Type-safe, ergonomic commands with named parameters and built-in help.
redisctl cloud database create \
--subscription 123456 \
--data @database.json \
--wait
redisctl enterprise database list -o table
Use when:
- Day-to-day operations
- Interactive use
- You want --help and validation
3. Workflows
Multi-step operations that handle sequencing, polling, and error recovery.
redisctl cloud workflow subscription-setup \
--name production \
--region us-east-1
redisctl enterprise workflow init-cluster \
--cluster-name prod \
--username admin@cluster.local
Use when:
- Setting up new resources
- Operations that require multiple API calls
- You want automatic waiting and error handling
Common Features
All commands share:
- Output formats - JSON (default), YAML, or table
- JMESPath queries - Filter and transform output with -q
- Async handling - --wait flag for operations that return task IDs
- Profile support - Multiple credential sets for different environments
Getting Help
# General help
redisctl --help
# Command help
redisctl cloud --help
redisctl cloud database --help
redisctl cloud database create --help
# Even leaf commands document their flags
redisctl cloud database list --help
Next Steps
- Cloud Quick Examples - See the three tiers in action
- Enterprise Quick Examples - Enterprise-specific examples
Cloud Quick Examples
Examples showing the three-tier model for Redis Cloud.
Setup
# Set credentials
export REDIS_CLOUD_API_KEY="your-api-key"
export REDIS_CLOUD_SECRET_KEY="your-secret-key"
# Or use Docker
alias redisctl='docker run --rm \
-e REDIS_CLOUD_API_KEY \
-e REDIS_CLOUD_SECRET_KEY \
ghcr.io/redis-developer/redisctl'
API Layer Examples
Direct REST access for scripting:
# Get account info
redisctl api cloud get /
# List all subscriptions
redisctl api cloud get /subscriptions
# Get specific subscription
redisctl api cloud get /subscriptions/123456
# List databases in subscription
redisctl api cloud get /subscriptions/123456/databases
# Create database (returns task ID)
redisctl api cloud post /subscriptions/123456/databases -d '{
"name": "cache",
"memoryLimitInGb": 1
}'
# Check task status
redisctl api cloud get /tasks/abc-123-def
Human Command Examples
Type-safe operations for daily use:
# List subscriptions with table output
redisctl cloud subscription list -o table
# Get subscription details
redisctl cloud subscription get 123456
# List databases with specific fields
redisctl cloud database list --subscription 123456 \
-q "[].{name:name,status:status,memory:memoryLimitInGb}" \
-o table
# Create database and wait for completion
redisctl cloud database create \
--subscription 123456 \
--name sessions \
--memory 2 \
--wait
# Get connection details
redisctl cloud database get 123456:789 \
-q '{endpoint: publicEndpoint, password: password}'
# Update database memory
redisctl cloud database update 123456:789 \
--data '{"memoryLimitInGb": 4}' \
--wait
# Delete database
redisctl cloud database delete 123456:789 --wait
Workflow Examples
Multi-step operations:
# Set up complete subscription with VPC peering
redisctl cloud workflow subscription-setup \
--name production \
--cloud-provider aws \
--region us-east-1 \
--memory-limit-in-gb 10 \
--with-vpc-peering \
--vpc-id vpc-abc123 \
--vpc-cidr 10.0.0.0/16 \
--create-database \
--database-name cache
This single command:
- Creates the subscription
- Waits for it to be active
- Sets up VPC peering
- Creates the initial database
- Returns connection details
Common Patterns
Get Database Connection String
ENDPOINT=$(redisctl cloud database get 123456:789 -q 'publicEndpoint')
PASSWORD=$(redisctl cloud database get 123456:789 -q 'password')
echo "redis://:$PASSWORD@$ENDPOINT"
List All Databases Across Subscriptions
for sub in $(redisctl cloud subscription list -q '[].id' --raw); do
echo "=== Subscription $sub ==="
redisctl cloud database list --subscription $sub -o table
done
Wait for Operation with Custom Polling
TASK_ID=$(redisctl cloud database create \
--subscription 123456 \
--data @database.json \
-q 'taskId')
while true; do
STATUS=$(redisctl cloud task get $TASK_ID -q 'status')
echo "Status: $STATUS"
case $STATUS in
"processing-completed")
echo "Database created!"
redisctl cloud task get $TASK_ID -q 'response.resourceId'
break
;;
"processing-error")
echo "Failed!"
redisctl cloud task get $TASK_ID -q 'response.error'
exit 1
;;
*) sleep 10 ;;
esac
done
Export Configuration
# Backup subscription and database configs
SUB_ID=123456
redisctl cloud subscription get $SUB_ID > subscription-$SUB_ID.json
redisctl cloud database list --subscription $SUB_ID > databases-$SUB_ID.json
Next Steps
- Enterprise Quick Examples - Enterprise-specific examples
- Cloud Overview - Full Cloud documentation
- Cloud Cookbook - Practical recipes
Enterprise Quick Examples
Examples showing the three-tier model for Redis Enterprise.
Setup
# Set credentials
export REDIS_ENTERPRISE_URL="https://cluster.example.com:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="your-password"
export REDIS_ENTERPRISE_INSECURE="true" # for self-signed certs
# Or use Docker
alias redisctl='docker run --rm \
-e REDIS_ENTERPRISE_URL \
-e REDIS_ENTERPRISE_USER \
-e REDIS_ENTERPRISE_PASSWORD \
-e REDIS_ENTERPRISE_INSECURE \
ghcr.io/redis-developer/redisctl'
API Layer Examples
Direct REST access for scripting:
# Get cluster info
redisctl api enterprise get /v1/cluster
# List all nodes
redisctl api enterprise get /v1/nodes
# List all databases
redisctl api enterprise get /v1/bdbs
# Get specific database
redisctl api enterprise get /v1/bdbs/1
# Create database
redisctl api enterprise post /v1/bdbs -d '{
"name": "cache",
"memory_size": 1073741824
}'
# Get cluster stats
redisctl api enterprise get /v1/cluster/stats/last
Human Command Examples
Type-safe operations for daily use:
# Get cluster info
redisctl enterprise cluster get
# List nodes in table format
redisctl enterprise node list -o table
# List databases with specific fields
redisctl enterprise database list \
-q "[].{id:uid,name:name,memory:memory_size,status:status}" \
-o table
# Create database with flags
redisctl enterprise database create \
--name sessions \
--memory-size 1073741824 \
--port 12000 \
--replication
# Stream cluster stats continuously
redisctl enterprise stats cluster --follow
# Stream database stats every 2 seconds
redisctl enterprise stats database 1 --follow --poll-interval 2
# Check license
redisctl enterprise license get
# Generate support package
redisctl enterprise support-package cluster --optimize --upload
Workflow Examples
Multi-step operations:
# Initialize a new cluster
redisctl enterprise workflow init-cluster \
--cluster-name production \
--username admin@cluster.local \
--password SecurePass123! \
--license-file license.txt \
--create-database \
--database-name default-cache
This single command:
- Bootstraps the cluster
- Sets up authentication
- Applies the license
- Creates an initial database
- Returns cluster details
Common Patterns
Health Check Script
#!/bin/bash
echo "=== Cluster Health ==="
redisctl enterprise cluster get -q '{name:name,status:status}'
echo "=== Nodes ==="
redisctl enterprise node list -q "[].{id:uid,status:status}" -o table
echo "=== Databases ==="
redisctl enterprise database list -q "[].{name:name,status:status}" -o table
echo "=== Active Alerts ==="
redisctl enterprise alerts list -q "[?state=='active']" -o table
Monitor Resources
# Watch cluster stats
watch -n 5 "redisctl enterprise stats cluster \
-q '{cpu:cpu_user,memory:free_memory,conns:total_connections}'"
Bulk Database Operations
# Update eviction policy on all databases
for db in $(redisctl enterprise database list -q '[].uid' --raw); do
echo "Updating database $db"
redisctl enterprise database update $db \
--data '{"eviction_policy": "volatile-lru"}'
done
Export Cluster Configuration
# Backup all configuration
DATE=$(date +%Y%m%d)
redisctl enterprise cluster get > cluster-$DATE.json
redisctl enterprise node list > nodes-$DATE.json
redisctl enterprise database list > databases-$DATE.json
redisctl enterprise user list > users-$DATE.json
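With dated snapshots in place, configuration drift between any two days is a one-line diff. A sketch (file names follow the pattern above):

```shell
#!/bin/bash
# diff_snapshots: show configuration drift between two dated database
# snapshots, e.g. diff_snapshots 20240101 20240201; prints "no drift"
# when the files are identical
diff_snapshots() {
  diff "databases-$1.json" "databases-$2.json" && echo "no drift"
}
```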
Rolling Node Maintenance
#!/bin/bash
for node in $(redisctl enterprise node list -q '[].uid' --raw); do
echo "Maintaining node $node..."
# Enter maintenance mode
redisctl enterprise node action $node maintenance-on
sleep 60
# Do maintenance work here
# Exit maintenance mode
redisctl enterprise node action $node maintenance-off
sleep 30
# Verify healthy
STATUS=$(redisctl enterprise node get $node -q 'status')
[ "$STATUS" != "active" ] && echo "Warning: node $node status is $STATUS"
done
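Fixed sleeps make the loop above fragile on slow nodes; a safer pattern is to poll each node until it reports active again (a sketch using the same status field the script already checks):

```shell
#!/bin/bash
# wait_node_active: poll a node's status until it is "active", checking
# every 10 seconds up to a maximum number of tries; returns 1 on timeout
# usage: wait_node_active NODE_UID [MAX_TRIES]
wait_node_active() {
  local node=$1 tries=${2:-30}
  while [ "$tries" -gt 0 ]; do
    if [ "$(redisctl enterprise node get "$node" -q 'status')" = "active" ]; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 10
  done
  return 1
}
```

Call it after maintenance-off in place of the fixed sleep, and abort the rolling loop if it times out.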
Support Package for Incident
# Generate optimized package and upload to Redis Support
redisctl enterprise support-package cluster \
--optimize \
--upload \
--output support-$(date +%Y%m%d-%H%M%S).tar.gz
Next Steps
- Cloud Quick Examples - Cloud-specific examples
- Enterprise Overview - Full Enterprise documentation
- Enterprise Cookbook - Practical recipes
Cookbook
Task-oriented recipes for common redisctl operations. Each recipe provides complete, copy-paste ready examples to accomplish specific tasks.
Quick Start
New to redisctl? Start here:
Redis Cloud Recipes
Getting Started
- Create Your First Database - 5 minutes
Networking
- Setup VPC Peering - 15-20 minutes
- Configure Private Service Connect (coming soon)
- Configure Transit Gateway (coming soon)
Security
- Configure ACL Security - 10-15 minutes
- Manage SSL/TLS Certificates (coming soon)
Operations
- Backup and Restore Workflow - 10-15 minutes
- Database Migration - 20-30 minutes
- Active-Active Setup - 30-45 minutes
Redis Enterprise Recipes
Enterprise Getting Started
- Create a Database - 5 minutes
Enterprise Operations
- Generate and Upload Support Package - 10 minutes
- Configure Database Replication - 10-15 minutes
- Configure Redis ACLs - 10 minutes
Cluster Management
- Cluster Health Check - 5 minutes
- Node Management - 10-15 minutes
How to Use These Recipes
Each recipe includes:
- Time estimate - How long it takes
- Prerequisites - What you need before starting
- Quick command - One-liner when possible
- Step-by-step - Detailed walkthrough
- Expected output - What success looks like
- Next steps - Related recipes
- Troubleshooting - Common errors and fixes
Contributing Recipes
Have a recipe to share? See our contribution guide.
Need More Detail?
These recipes are designed for quick wins. For comprehensive command documentation, see:
- Cloud Command Reference
- Enterprise Command Reference
Create Your First Redis Cloud Database
⏱️ Time: 5-10 minutes
📋 Prerequisites:
- Redis Cloud account (sign up)
- redisctl installed (installation guide)
- Profile configured with Cloud credentials (authentication guide)
Quick Command
If you already have a subscription, create a database with one command:
redisctl cloud database create \
--subscription YOUR_SUBSCRIPTION_ID \
--data '{"name": "my-first-db", "memoryLimitInGb": 1}' \
--wait
Step-by-Step Guide
1. Verify Your Setup
First, check that redisctl can connect to Redis Cloud:
redisctl cloud subscription list -o table
What you should see:
┌────┬─────────────────┬────────┬────────────┐
│ ID │ Name │ Status │ Provider │
├────┼─────────────────┼────────┼────────────┤
│ 42 │ my-subscription │ active │ AWS │
└────┴─────────────────┴────────┴────────────┘
Troubleshooting:
- ❌ "401 Unauthorized" → Check your API credentials with redisctl profile get
- ❌ Empty table → Create a subscription first (see subscription guide)
2. Choose Your Database Configuration
Decide on your database specifications. Here's a minimal configuration:
{
"name": "my-first-db",
"memoryLimitInGb": 1,
"protocol": "redis"
}
Common options:
- memoryLimitInGb: Memory size (1-100+ GB)
- protocol: redis or memcached
- dataPersistence: none, aof-every-1-second, snapshot-every-1-hour
- replication: true for high availability
3. Create the Database
Use the subscription ID from step 1:
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "my-first-db",
"memoryLimitInGb": 1,
"protocol": "redis",
"dataPersistence": "aof-every-1-second",
"replication": true
}' \
--wait \
--wait-timeout 300
What's happening:
- --wait: Waits for the database to become active
- --wait-timeout 300: Waits up to 5 minutes
- Without --wait: Returns immediately with a task ID
What you should see:
{
"taskId": "abc123...",
"status": "processing"
}
...
Database creation completed successfully!
{
"database_id": 12345,
"name": "my-first-db",
"status": "active",
"public_endpoint": "redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345"
}
4. Get Your Connection Details
Retrieve your database credentials:
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-o json \
-q '{endpoint: public_endpoint, password: password}'
Output:
{
"endpoint": "redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com:12345",
"password": "your-password-here"
}
5. Test Your Connection
Using redis-cli:
redis-cli -h redis-12345.c123.us-east-1-1.ec2.cloud.redislabs.com \
-p 12345 \
-a your-password-here \
PING
Expected response: PONG
Advanced Options
Using a JSON File
For complex configurations, use a file:
# Create database-config.json
cat > database-config.json << 'EOF'
{
"name": "production-db",
"memoryLimitInGb": 10,
"protocol": "redis",
"dataPersistence": "aof-every-1-second",
"replication": true,
"throughputMeasurement": {
"by": "operations-per-second",
"value": 25000
},
"dataEvictionPolicy": "volatile-lru",
"modules": [
{"name": "RedisJSON"}
]
}
EOF
# Create database
redisctl cloud database create \
--subscription 42 \
--data @database-config.json \
--wait
JSON Output for Automation
Use -o json for scripts:
SUB_ID=42
DB_INFO=$(redisctl cloud database create \
  --subscription $SUB_ID \
  --data '{"name": "api-cache", "memoryLimitInGb": 2}' \
  --wait \
  -o json)
DB_ID=$(redisctl cloud database list --subscription $SUB_ID \
  -q "[?name=='api-cache'].databaseId | [0]")
echo "Created database: $DB_ID"
Common Issues
Database Creation Times Out
Error: Database creation timed out after 300 seconds
Solution: Some regions take longer. Increase timeout:
redisctl cloud database create ... --wait --wait-timeout 600
Insufficient Subscription Capacity
Error: Subscription has insufficient capacity
Solution: Either:
- Delete unused databases: redisctl cloud database delete ...
- Upgrade subscription: Contact Redis support or use the web console
Invalid Configuration
Error: 400 Bad Request - Invalid memory limit
Solution: Check subscription limits:
redisctl cloud subscription get --subscription 42 -q 'pricing'
Next Steps
Now that you have a database:
- 🔒 Configure ACL Security - Secure your database with access controls
- 🌐 Set Up VPC Peering - Connect to your private network
- 💾 Configure Backups - Protect your data
- 📊 Monitor Performance - Track your database metrics
See Also
- Cloud Database Command Reference - Complete command documentation
- Database Configuration Guide - All configuration options
- Redis Cloud Pricing - Understand costs
Setup VPC Peering
Time: 15-20 minutes
Prerequisites:
- Redis Cloud subscription with database
- AWS/GCP/Azure VPC to peer with
- Network admin access to your cloud provider
- redisctl configured with Cloud credentials
What is VPC Peering?
VPC Peering creates a private network connection between your Redis Cloud subscription and your application's VPC, eliminating public internet exposure and reducing latency.
Quick Command
If you already have your VPC details:
redisctl cloud connectivity vpc-peering create \
--subscription YOUR_SUB_ID \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-abc123",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1"
}' \
--wait
Step-by-Step Guide
1. Get Your Subscription Details
First, identify which subscription to peer:
redisctl cloud subscription list -o table -q '[].{id: id, name: name, region: "deployment.regions[0].region"}'
Example output:
┌────┬──────────────┬───────────┐
│ id │ name │ region │
├────┼──────────────┼───────────┤
│ 42 │ production │ us-east-1 │
└────┴──────────────┴───────────┘
2. Gather Your VPC Information
You'll need these details from your cloud provider:
For AWS:
- AWS Account ID (12-digit number)
- VPC ID (starts with vpc-)
- VPC CIDR block (e.g., 10.0.0.0/16)
- Region (must match Redis Cloud region)
For GCP:
- GCP Project ID
- Network name
- Region
For Azure:
- Subscription ID (Azure subscription, not Redis)
- Resource group
- VNet name
- Region
3. Create VPC Peering Request
AWS Example
redisctl cloud connectivity vpc-peering create \
--subscription 42 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-abc123def",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1"
}' \
--wait \
--wait-timeout 600
GCP Example
redisctl cloud connectivity vpc-peering create \
--subscription 42 \
--data '{
"provider_name": "GCP",
"gcp_project_id": "my-project-123",
"network_name": "my-vpc-network",
"gcp_redis_project_id": "redis-project-456",
"gcp_redis_network_name": "redis-network",
"region": "us-central1"
}' \
--wait
Azure Example
redisctl cloud connectivity vpc-peering create \
--subscription 42 \
--data '{
"provider_name": "Azure",
"azure_subscription_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"resource_group": "my-resource-group",
"vnet_name": "my-vnet",
"region": "East US"
}' \
--wait
What you should see:
{
"taskId": "xyz789...",
"status": "processing"
}
...
VPC Peering created successfully!
{
"vpc_peering_id": 123,
"status": "pending-acceptance",
"provider_name": "AWS",
"aws_peering_id": "pcx-abc123def"
}
4. Accept the Peering Connection (AWS Only)
For AWS, you must accept the peering request in your AWS console:
# Get the AWS peering connection ID
redisctl cloud connectivity vpc-peering get \
--subscription 42 \
--peering-id 123 \
-q 'aws_peering_id'
# Output: "pcx-abc123def"
In AWS Console:
- Go to VPC Dashboard
- Click "Peering Connections"
- Find connection pcx-abc123def
- Click "Actions" > "Accept Request"
Via AWS CLI:
aws ec2 accept-vpc-peering-connection \
--vpc-peering-connection-id pcx-abc123def \
--region us-east-1
5. Update Route Tables
Add routes to your VPC route tables to direct Redis traffic through the peering connection:
# Get Redis Cloud CIDR
redisctl cloud subscription get \
--subscription 42 \
-q 'deployment.regions[0].networking.cidr'
# Output: "172.31.0.0/24"
AWS Example:
aws ec2 create-route \
--route-table-id rtb-abc123 \
--destination-cidr-block 172.31.0.0/24 \
--vpc-peering-connection-id pcx-abc123def
6. Update Security Groups
Allow inbound Redis traffic (port 6379 or your database port):
aws ec2 authorize-security-group-ingress \
--group-id sg-abc123 \
--protocol tcp \
--port 6379 \
--cidr 172.31.0.0/24
7. Verify the Connection
Check peering status:
redisctl cloud connectivity vpc-peering get \
--subscription 42 \
--peering-id 123 \
-o json -q '{status: status, aws_peering_id: aws_peering_id}'
Expected status: active
8. Test Connectivity
Get your database endpoint and test from an instance in your VPC:
# Get database endpoint
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-q 'private_endpoint'
# From an EC2 instance in your VPC:
redis-cli -h redis-12345.internal.cloud.redislabs.com -p 12345 PING
Advanced: Active-Active VPC Peering
For Active-Active (CRDB) databases, peer with each region:
# List Active-Active regions
redisctl cloud subscription get --subscription 42 \
-q 'deployment.regions[].{region: region, cidr: networking.cidr}'
# Create peering for each region
redisctl cloud connectivity vpc-peering create-aa \
--subscription 42 \
--region-id 1 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-east-123",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1"
}' \
--wait
Using Configuration Files
For complex setups, use a JSON file:
cat > vpc-peering.json << 'EOF'
{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-abc123def",
"vpc_cidr": "10.0.0.0/16",
"region": "us-east-1",
"vpc_peering_name": "production-redis-peer"
}
EOF
redisctl cloud connectivity vpc-peering create \
--subscription 42 \
--data @vpc-peering.json \
--wait
Common Issues
Peering Request Times Out
Error: VPC peering creation timed out
Solution: Check async operation status manually:
redisctl cloud action get --task-id xyz789...
CIDR Overlap
Error: VPC CIDR blocks overlap
Solution: Redis Cloud and your VPC cannot have overlapping CIDR blocks. Either:
- Choose a different CIDR for new subscription
- Use a different VPC with non-overlapping CIDR
Peering Stuck in "pending-acceptance"
Solution: For AWS, you must manually accept the peering request (see Step 4)
Cannot Connect After Peering
Troubleshooting checklist:
- Verify peering status is active
- Check route tables have correct routes
- Verify security groups allow Redis port
- Ensure database has private endpoint enabled
- Test from instance actually in the peered VPC
Monitoring VPC Peering
List all peerings for a subscription:
redisctl cloud connectivity vpc-peering list \
--subscription 42 \
-o table \
-q '[].{id: id, status: status, provider: provider_name, region: region}'
Deleting VPC Peering
redisctl cloud connectivity vpc-peering delete \
--subscription 42 \
--peering-id 123 \
--wait
This also removes the peering from your cloud provider.
Next Steps
- Configure ACL Security - Secure your private database
- Setup Private Service Connect - Alternative private connectivity for GCP
- Configure Transit Gateway - Multi-VPC connectivity for AWS
- Monitor Performance - Track latency improvements
See Also
- VPC Peering Command Reference - Complete command documentation
- Redis Cloud Networking Guide - Official docs
- AWS VPC Peering - AWS documentation
Configure ACL Security
Time: 10-15 minutes
Prerequisites:
- Redis Cloud database already created
- redisctl configured with Cloud credentials
- Basic understanding of Redis ACL commands
What are ACLs?
Access Control Lists (ACLs) allow you to create users with specific permissions, limiting which commands they can run and which keys they can access. This is essential for:
- Multi-tenant applications
- Restricting administrative access
- Compliance requirements
- Defense in depth security
Quick Command
Create a read-only user for your application:
# Create Redis rule
redisctl cloud acl create-redis-rule \
--subscription YOUR_SUB_ID \
--data '{"name": "readonly-rule", "rule": "+@read ~*"}' \
--wait
# Create role with the rule
redisctl cloud acl create-role \
--subscription YOUR_SUB_ID \
--data '{"name": "readonly-role", "redis_rules": [{"rule_name": "readonly-rule"}]}' \
--wait
# Create user with the role
redisctl cloud acl create-acl-user \
--subscription YOUR_SUB_ID \
--data '{"name": "app-reader", "role": "readonly-role", "password": "SecurePass123!"}' \
--wait
Step-by-Step Guide
Understanding the ACL Hierarchy
Redis Cloud uses a three-level ACL system:
- Redis Rules - Define command and key access patterns (Redis ACL syntax)
- Roles - Group multiple Redis rules together
- Users - Assigned one role and a password
1. List Existing ACL Components
# View current Redis rules
redisctl cloud acl list-redis-rules --subscription 42 -o table
# View current roles
redisctl cloud acl list-roles --subscription 42 -o table
# View current users
redisctl cloud acl list-acl-users --subscription 42 -o table
2. Create Redis ACL Rules
Redis rules use standard Redis ACL syntax.
Common Rule Patterns
Read-only access:
redisctl cloud acl create-redis-rule \
--subscription 42 \
--data '{
"name": "readonly",
"rule": "+@read ~*"
}' \
--wait
Write-only to specific keys:
redisctl cloud acl create-redis-rule \
--subscription 42 \
--data '{
"name": "write-metrics",
"rule": "+set +del ~metrics:*"
}' \
--wait
Full access except dangerous commands:
redisctl cloud acl create-redis-rule \
--subscription 42 \
--data '{
"name": "safe-admin",
"rule": "+@all -@dangerous ~*"
}' \
--wait
Access to specific key prefix:
redisctl cloud acl create-redis-rule \
--subscription 42 \
--data '{
"name": "user-sessions",
"rule": "+@all ~session:*"
}' \
--wait
3. Create ACL Roles
Roles combine one or more Redis rules:
# Simple role with one rule
redisctl cloud acl create-role \
--subscription 42 \
--data '{
"name": "readonly-role",
"redis_rules": [
{"rule_name": "readonly"}
]
}' \
--wait
# Complex role with multiple rules
redisctl cloud acl create-role \
--subscription 42 \
--data '{
"name": "app-worker",
"redis_rules": [
{"rule_name": "readonly"},
{"rule_name": "write-metrics"}
]
}' \
--wait
4. Create ACL Users
Users are assigned a role and password:
redisctl cloud acl create-acl-user \
--subscription 42 \
--data '{
"name": "app-reader",
"role": "readonly-role",
"password": "SecureReadOnlyPass123!"
}' \
--wait
What you should see:
{
"taskId": "abc123...",
"status": "processing"
}
...
ACL user created successfully!
{
"id": 456,
"name": "app-reader",
"role": "readonly-role",
"status": "active"
}
5. Assign Users to Databases
After creating users, assign them to specific databases:
# Get database ID
redisctl cloud database list \
--subscription 42 \
-q '[].{id: database_id, name: name}'
# Update database with ACL users
redisctl cloud database update \
--subscription 42 \
--database-id 12345 \
--data '{
"security": {
"users": ["app-reader", "app-writer"]
}
}' \
--wait
6. Test ACL User
Connect to your database with the new user:
# Get database endpoint
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-q '{endpoint: public_endpoint, port: port}'
# Test connection
redis-cli -h redis-12345.cloud.redislabs.com \
-p 12345 \
--user app-reader \
--pass SecureReadOnlyPass123! \
PING
# Test permissions (should succeed)
redis-cli --user app-reader --pass SecureReadOnlyPass123! \
-h redis-12345.cloud.redislabs.com -p 12345 \
GET mykey
# Test restricted command (should fail)
redis-cli --user app-reader --pass SecureReadOnlyPass123! \
-h redis-12345.cloud.redislabs.com -p 12345 \
SET mykey value
# Error: NOPERM this user has no permissions to run the 'set' command
Common ACL Patterns
Application Access Pattern
Separate users for read, write, and admin operations:
# Read-only for queries
redisctl cloud acl create-redis-rule --subscription 42 \
--data '{"name": "app-read", "rule": "+@read +@connection ~*"}' --wait
# Write access for updates
redisctl cloud acl create-redis-rule --subscription 42 \
--data '{"name": "app-write", "rule": "+@write +@read +@connection ~*"}' --wait
# Admin for maintenance
redisctl cloud acl create-redis-rule --subscription 42 \
--data '{"name": "app-admin", "rule": "+@all ~*"}' --wait
# Create roles and users
redisctl cloud acl create-role --subscription 42 \
--data '{"name": "reader", "redis_rules": [{"rule_name": "app-read"}]}' --wait
redisctl cloud acl create-role --subscription 42 \
--data '{"name": "writer", "redis_rules": [{"rule_name": "app-write"}]}' --wait
redisctl cloud acl create-role --subscription 42 \
--data '{"name": "admin", "redis_rules": [{"rule_name": "app-admin"}]}' --wait
Multi-Tenant Pattern
Isolate tenants by key prefix:
# Tenant A access
redisctl cloud acl create-redis-rule --subscription 42 \
--data '{"name": "tenant-a", "rule": "+@all ~tenant:a:*"}' --wait
# Tenant B access
redisctl cloud acl create-redis-rule --subscription 42 \
--data '{"name": "tenant-b", "rule": "+@all ~tenant:b:*"}' --wait
# Create roles and users
redisctl cloud acl create-role --subscription 42 \
--data '{"name": "tenant-a-role", "redis_rules": [{"rule_name": "tenant-a"}]}' --wait
redisctl cloud acl create-acl-user --subscription 42 \
--data '{"name": "tenant-a-user", "role": "tenant-a-role", "password": "TenantAPass123!"}' --wait
Using Configuration Files
For complex ACL setups:
cat > acl-setup.json << 'EOF'
{
"rules": [
{
"name": "readonly",
"rule": "+@read ~*"
},
{
"name": "write-cache",
"rule": "+set +get +del +expire ~cache:*"
}
],
"roles": [
{
"name": "cache-worker",
"redis_rules": [
{"rule_name": "readonly"},
{"rule_name": "write-cache"}
]
}
],
"users": [
{
"name": "worker-1",
"role": "cache-worker",
"password": "Worker1Pass!"
}
]
}
EOF
# Create rules - using jq to extract JSON objects from array
for rule in $(jq -c '.rules[]' acl-setup.json); do
redisctl cloud acl create-redis-rule \
--subscription 42 \
--data "$rule" \
--wait
done
# Create roles
for role in $(jq -c '.roles[]' acl-setup.json); do
redisctl cloud acl create-role \
--subscription 42 \
--data "$role" \
--wait
done
# Create users
for user in $(jq -c '.users[]' acl-setup.json); do
redisctl cloud acl create-acl-user \
--subscription 42 \
--data "$user" \
--wait
done
Redis ACL Syntax Reference
Common patterns in Redis ACL rules:
Command categories:
- `+@read` - All read commands
- `+@write` - All write commands
- `+@admin` - Administrative commands
- `+@dangerous` - Dangerous commands (FLUSHDB, KEYS, etc.)
- `+@all` - All commands
- `-@dangerous` - Deny dangerous commands
Specific commands:
- `+get` - Allow GET command
- `+set` - Allow SET command
- `-flushdb` - Deny FLUSHDB
Key patterns:
- `~*` - All keys
- `~cache:*` - Keys starting with "cache:"
- `~user:*` - Keys starting with "user:"
- `~*~-secret:*` - All keys except those starting with "secret:"
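These patterns can be exercised against a local redis-server before creating Cloud rules. ACL DRYRUN (Redis 7+) reports whether a user could run a given command; this hedged sketch skips cleanly when no local server is reachable:

```shell
# Try an ACL rule locally with ACL DRYRUN (Redis 7+) before creating it in
# Redis Cloud; skips cleanly when no local redis-server is reachable
if redis-cli -h 127.0.0.1 -p 6379 ping >/dev/null 2>&1; then
  redis-cli ACL SETUSER ruletest on '>tmp-pass' '+@read' '~cache:*'
  redis-cli ACL DRYRUN ruletest GET cache:item1    # allowed by +@read ~cache:*
  redis-cli ACL DRYRUN ruletest SET cache:item1 v  # denied: no write commands
  redis-cli ACL DRYRUN ruletest GET other:key      # denied: key outside ~cache:*
  redis-cli ACL DELUSER ruletest
else
  echo "no local redis-server; skipping ACL dry run"
fi
```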
Managing ACLs
View ACL Details
# Get specific user details
redisctl cloud acl get-acl-user \
--subscription 42 \
--user-id 456 \
-o json
# List all users with their roles
redisctl cloud acl list-acl-users \
--subscription 42 \
-o json \
-q '[].{name: name, role: role, id: id}'
Update ACL Rules
# Update existing rule
redisctl cloud acl update-redis-rule \
--subscription 42 \
--rule-id 789 \
--data '{
"name": "readonly",
"rule": "+@read +@connection ~*"
}' \
--wait
Update User Password
redisctl cloud acl update-acl-user \
--subscription 42 \
--user-id 456 \
--data '{
"password": "NewSecurePass456!"
}' \
--wait
Delete ACL Components
# Delete user
redisctl cloud acl delete-acl-user \
--subscription 42 \
--user-id 456 \
--wait
# Delete role
redisctl cloud acl delete-role \
--subscription 42 \
--role-id 321 \
--wait
# Delete Redis rule
redisctl cloud acl delete-redis-rule \
--subscription 42 \
--rule-id 789 \
--wait
Common Issues
Cannot Create User with Reserved Name
Error: User name 'default' is reserved
Solution: Avoid reserved names: default, admin. Use descriptive application-specific names.
ACL Rule Syntax Error
Error: Invalid ACL rule syntax
Solution: Test your ACL rule locally first:
redis-cli ACL SETUSER testuser "+@read ~*"
redis-cli ACL GETUSER testuser
redis-cli ACL DELUSER testuser
User Cannot Connect
Troubleshooting:
- Verify user is assigned to the database
- Check password is correct
- Ensure user status is "active"
- Test with default user first to isolate ACL vs. network issues
Permission Denied
Error: NOPERM this user has no permissions to run the 'set' command
Solution: Review and update the user's role and rules:
# Check user's role
redisctl cloud acl get-acl-user --subscription 42 --user-id 456 -q 'role'
# Check role's rules
redisctl cloud acl list-roles --subscription 42 -q '[?name==`readonly-role`]'
Best Practices
- Principle of Least Privilege: Give users only the permissions they need
- Use Key Prefixes: Design your key naming to support ACLs (e.g., `user:123:profile`)
- Separate Credentials: Different users for read vs. write operations
- Rotate Passwords: Regularly update user passwords
- Test Before Production: Verify ACL rules in a test database first
- Document Rules: Keep track of what each rule and role does
Next Steps
- Setup VPC Peering - Private network connectivity
- Configure TLS/SSL - Encryption in transit
- Backup and Restore - Protect your data
- Monitor Performance - Track database metrics
See Also
- ACL Command Reference - Complete command documentation
- Redis ACL Documentation - Redis ACL syntax
- Redis Cloud Security - Security best practices
Backup and Restore Workflow
Time: 10-15 minutes
Prerequisites:
- Redis Cloud database with data persistence enabled
- redisctl configured with Cloud credentials
- Storage location configured (done automatically for Cloud)
What are Backups?
Redis Cloud provides automated backups and on-demand manual backups to protect your data. Backups can be:
- Automated - Scheduled periodic backups (hourly, daily, weekly)
- Manual - On-demand backups triggered when needed
- Stored - In Redis Cloud storage or your own cloud storage (AWS S3, GCP GCS, Azure Blob)
Quick Commands
# Trigger manual backup
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# Check backup status
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json
Step-by-Step Guide
1. Check Current Backup Configuration
View your database's backup settings:
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-o json \
-q '{
data_persistence: data_persistence,
backup_interval: backup_interval,
backup_path: backup_path
}'
Example output:
{
"data_persistence": "aof-every-1-second",
"backup_interval": "every-24-hours",
"backup_path": "redis-cloud-storage"
}
2. Configure Backup Settings
If backups aren't configured, enable them:
redisctl cloud database update \
--subscription 42 \
--database-id 12345 \
--data '{
"data_persistence": "aof-every-1-second",
"backup_interval": "every-24-hours"
}' \
--wait
Backup interval options:
- `every-12-hours` - Twice daily
- `every-24-hours` - Daily (recommended for most)
- `every-week` - Weekly
3. Trigger Manual Backup
Create an on-demand backup before major changes:
redisctl cloud database backup \
--database-id 42:12345 \
--wait \
--wait-timeout 600
What you should see:
{
"taskId": "backup-abc123",
"status": "processing"
}
...
Backup completed successfully!
{
"backup_id": "bkp-20251007-143022",
"status": "completed",
"size_bytes": 10485760,
"timestamp": "2025-10-07T14:30:22Z"
}
4. Monitor Backup Status
Check backup progress and history:
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json
Example output:
{
"last_backup": {
"backup_id": "bkp-20251007-143022",
"status": "completed",
"timestamp": "2025-10-07T14:30:22Z",
"size_bytes": 10485760,
"type": "manual"
},
"next_scheduled": "2025-10-08T14:00:00Z",
"backup_progress": null
}
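The timestamp in this payload can be turned into an age check with jq's date helpers. A small sketch using the sample output above (inlined here for illustration; in practice pipe `redisctl ... -o json` into jq):

```shell
# Compute the age (hours) of the last backup from backup-status JSON
# (sample payload inlined for illustration)
backup_status='{"last_backup":{"backup_id":"bkp-20251007-143022","status":"completed","timestamp":"2025-10-07T14:30:22Z"}}'
age_hours=$(echo "$backup_status" | jq '
  (now - (.last_backup.timestamp | fromdateiso8601)) / 3600 | floor')
echo "last backup is ${age_hours}h old"
```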
5. List Available Backups
View all backups for a database:
# Get subscription backup info
redisctl cloud subscription get \
--subscription 42 \
-o json \
-q 'databases[?database_id==`12345`].backup_status'
Restore Scenarios
Scenario 1: Restore from Recent Backup
If you need to restore to a previous state:
# Create new database from backup
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "restored-db",
"memory_limit_in_gb": 1,
"restore_from_backup": {
"backup_id": "bkp-20251007-143022"
}
}' \
--wait
Note: Redis Cloud doesn't support in-place restore. You create a new database from a backup, verify it, then switch your application.
Scenario 2: Point-in-Time Recovery
For databases with AOF persistence:
# Create database with specific backup
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "pit-restore",
"memory_limit_in_gb": 2,
"restore_from_backup": {
"backup_id": "bkp-20251007-120000",
"timestamp": "2025-10-07T14:00:00Z"
}
}' \
--wait
Scenario 3: Clone Production to Staging
Use backups to create staging environments:
# Get latest production backup using JMESPath
BACKUP_ID=$(redisctl cloud database backup-status \
--database-id 42:12345 \
-q 'last_backup.backup_id' \
--raw)
# Create staging database from production backup
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "staging-db",
"memory_limit_in_gb": 1,
"restore_from_backup": {
"backup_id": "'$BACKUP_ID'"
}
}' \
--wait
Advanced: Custom Backup Storage
Configure S3 Backup Storage
Store backups in your own AWS S3 bucket:
redisctl cloud database update \
--subscription 42 \
--database-id 12345 \
--data '{
"backup_path": "s3://my-backup-bucket/redis-backups",
"backup_s3_access_key_id": "AKIAIOSFODNN7EXAMPLE",
"backup_s3_secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
}' \
--wait
Configure GCS Backup Storage
Store backups in Google Cloud Storage:
# Build the payload with jq so the service-account JSON is embedded as a
# properly escaped string (command substitution does not run inside single quotes)
redisctl cloud database update \
--subscription 42 \
--database-id 12345 \
--data "$(jq -n --arg creds "$(jq -c . gcs-key.json)" '{
backup_path: "gs://my-backup-bucket/redis-backups",
backup_gcs_credentials: $creds
}')" \
--wait
Configure Azure Blob Storage
Store backups in Azure:
redisctl cloud database update \
--subscription 42 \
--database-id 12345 \
--data '{
"backup_path": "abs://my-storage-account/redis-backups",
"backup_abs_account_name": "mystorageaccount",
"backup_abs_account_key": "your-account-key"
}' \
--wait
Backup Automation Strategy
Daily Backups with Retention
#!/bin/bash
# backup-daily.sh - Daily backup script
SUBSCRIPTION_ID=42
DATABASE_ID=12345
DATE=$(date +%Y%m%d)
echo "Starting daily backup for database $DATABASE_ID..."
# Trigger backup
redisctl cloud database backup \
--database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
--wait \
--wait-timeout 900 \
-o json | tee backup-${DATE}.log
# Check backup status
redisctl cloud database backup-status \
--database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
-o json | jq '.last_backup | {id: .backup_id, status: .status, size_mb: (.size_bytes / 1048576 | floor)}'
echo "Backup completed: $DATE"
Schedule with cron:
# Daily at 2 AM
0 2 * * * /path/to/backup-daily.sh >> /var/log/redis-backup.log 2>&1
Pre-Deployment Backup
#!/bin/bash
# pre-deploy-backup.sh - Backup before deployments
SUBSCRIPTION_ID=42
DATABASE_ID=12345
DEPLOYMENT_ID=$(git rev-parse --short HEAD)
echo "Creating pre-deployment backup for $DEPLOYMENT_ID..."
# Trigger backup
BACKUP_RESULT=$(redisctl cloud database backup \
--database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
--wait \
-o json)
# Extract backup_id using jq (result is JSON from redisctl)
BACKUP_ID=$(echo "$BACKUP_RESULT" | jq -r '.backup_id')
echo "Backup created: $BACKUP_ID"
echo "Safe to proceed with deployment $DEPLOYMENT_ID"
# Save backup ID for potential rollback
echo "$BACKUP_ID" > .last-backup-id
Backup Verification
Verify Backup Integrity
# Create test database from backup
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "backup-verify",
"memory_limit_in_gb": 1,
"restore_from_backup": {
"backup_id": "bkp-20251007-143022"
}
}' \
--wait
# Test data integrity (example with known key)
redis-cli -h backup-verify-endpoint -p 12346 GET test-key
# Clean up test database
redisctl cloud database delete \
--subscription 42 \
--database-id 67890 \
--wait
Monitoring Backup Health
Check Backup Metrics
# Get backup statistics (piped through jq, since JMESPath supports neither
# arithmetic nor date math)
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json | jq '{
last_backup_age_days: ((now - (.last_backup.timestamp | fromdateiso8601)) / 86400 | floor),
backup_size_mb: (.last_backup.size_bytes / 1048576 | floor),
next_backup: .next_scheduled,
status: .last_backup.status
}'
Alert on Backup Failures
#!/bin/bash
# check-backup-health.sh
SUBSCRIPTION_ID=42
DATABASE_ID=12345
MAX_AGE_HOURS=36
BACKUP_STATUS=$(redisctl cloud database backup-status \
--database-id ${SUBSCRIPTION_ID}:${DATABASE_ID} \
-o json)
# Parse JSON output with jq
LAST_BACKUP_TIME=$(echo "$BACKUP_STATUS" | jq -r '.last_backup.timestamp')
LAST_BACKUP_STATUS=$(echo "$BACKUP_STATUS" | jq -r '.last_backup.status')
# Calculate age in hours (GNU date shown; on macOS use:
# date -j -f '%Y-%m-%dT%H:%M:%SZ' "$LAST_BACKUP_TIME" +%s)
CURRENT_TIME=$(date +%s)
BACKUP_TIME=$(date -d "$LAST_BACKUP_TIME" +%s)
AGE_HOURS=$(( ($CURRENT_TIME - $BACKUP_TIME) / 3600 ))
if [ "$LAST_BACKUP_STATUS" != "completed" ] || [ $AGE_HOURS -gt $MAX_AGE_HOURS ]; then
echo "ALERT: Backup health check failed!"
echo "Status: $LAST_BACKUP_STATUS"
echo "Age: $AGE_HOURS hours"
# Send alert (email, Slack, PagerDuty, etc.)
exit 1
fi
echo "Backup health OK - Last backup: $AGE_HOURS hours ago"
Disaster Recovery Plan
1. Document Current State
# Save current database configuration
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-o json > database-config-$(date +%Y%m%d).json
# Record backup details
redisctl cloud database backup-status \
--database-id 42:12345 \
-o json > backup-status-$(date +%Y%m%d).json
2. Test Recovery Procedure
Regularly test your restore process:
# Quarterly DR test
./scripts/dr-test.sh production-db test-restore-db
3. Recovery Time Objective (RTO)
Estimate restore time based on database size:
- Small (< 1GB): 5-10 minutes
- Medium (1-10GB): 15-30 minutes
- Large (> 10GB): 30-60+ minutes
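The ranges above can be wrapped in a small helper for runbooks (the sizes and times are the rough estimates from this list, not an API guarantee):

```shell
# Rough restore-time estimate by database size, per the ranges above
estimate_rto() {
  local size_gb=$1
  if [ "$size_gb" -lt 1 ]; then echo "5-10 minutes"
  elif [ "$size_gb" -le 10 ]; then echo "15-30 minutes"
  else echo "30-60+ minutes"
  fi
}

estimate_rto 4   # medium database
```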
Common Issues
Backup Takes Too Long
Error: Backup timed out after 300 seconds
Solution: Increase timeout for large databases:
redisctl cloud database backup \
--database-id 42:12345 \
--wait \
--wait-timeout 1800 # 30 minutes
Restore Fails with "Backup Not Found"
Error: Backup ID not found
Solution: List available backups and verify ID:
redisctl cloud database backup-status \
--database-id 42:12345 \
-q 'last_backup.backup_id'
Insufficient Storage for Backup
Error: Insufficient storage space
Solution:
- Review backup retention policy
- Clean up old backups
- Upgrade storage capacity
- Use custom storage (S3/GCS/Azure)
Restored Database Has Missing Data
Troubleshooting:
- Check backup timestamp vs. expected data
- Verify AOF persistence was enabled
- Check if backup completed successfully
- Consider point-in-time recovery if available
Best Practices
- Enable Persistence: Always use AOF or snapshot persistence
- Multiple Backup Windows: Daily automated + manual before changes
- Test Restores: Regularly verify backups can be restored
- Off-Site Backups: Use custom storage in different region
- Monitor Backup Age: Alert if backups are too old
- Document Procedures: Maintain runbooks for recovery
- Verify Backup Size: Sudden size changes may indicate issues
Next Steps
- Database Migration - Migrate data between databases
- Monitor Performance - Track database health
- Configure ACLs - Secure your restored database
- Setup High Availability - Add redundancy
See Also
- Database Backup Reference - Complete command documentation
- Redis Cloud Backup Guide - Official backup documentation
- Data Persistence Options - Understanding AOF vs. RDB
Database Migration
Time: 20-30 minutes
Prerequisites:
- Source and destination databases (Redis Cloud or external Redis)
- redisctl configured with Cloud credentials
- Network connectivity between source and destination
Migration Strategies
Three common migration approaches:
- Import from backup - Best for one-time migrations
- Online replication - For minimal downtime
- RIOT (Redis Input/Output Tool) - For complex transformations
Quick Migration (from backup)
# Create backup from source
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# Create new database from backup
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "migrated-db",
"memory_limit_in_gb": 2,
"restore_from_backup": {
"backup_id": "bkp-20251007-143022"
}
}' \
--wait
Method 1: Import from RDB File
1. Export from Source Database
# If source is Redis Cloud, create backup and download
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# Get backup URL
redisctl cloud database backup-status \
--database-id 42:12345 \
-q 'last_backup.download_url'
# Download backup
curl -o source-backup.rdb "https://backup-url..."
2. Upload to Cloud Storage
# Upload to S3
aws s3 cp source-backup.rdb s3://my-bucket/redis-migration/
# Get presigned URL (valid for import)
aws s3 presign s3://my-bucket/redis-migration/source-backup.rdb --expires-in 3600
3. Import to Destination Database
# Import data
redisctl cloud database import \
--database-id 43:67890 \
--data '{
"source_type": "http-url",
"import_from_uri": "https://presigned-url..."
}' \
--wait \
--wait-timeout 1800
4. Monitor Import Progress
# Check import status
redisctl cloud database import-status \
--database-id 43:67890 \
-o json -q '{
status: status,
progress: progress_percentage,
imported_keys: keys_imported
}'
Method 2: Online Replication
For minimal downtime, use Redis replication:
1. Setup Destination as Replica
# Create destination database with replication source
redisctl cloud database create \
--subscription 43 \
--data '{
"name": "replica-db",
"memory_limit_in_gb": 2,
"replication": true,
"replica_of": ["redis-12345.cloud.redislabs.com:12345"]
}' \
--wait
2. Monitor Replication Lag
# Check replication status
redis-cli -h replica-endpoint -p 67890 INFO replication
3. Cutover to New Database
# Stop writes to source
# Wait for replication to catch up (lag = 0)
# Promote replica to master
redisctl cloud database update \
--subscription 43 \
--database-id 67890 \
--data '{"replica_of": []}' \
--wait
# Update application to use new endpoint
Method 3: Cross-Region Migration
Migrate between different Redis Cloud regions:
# 1. Create backup in source region
redisctl cloud database backup \
--database-id 42:12345 \
--wait
# 2. Export backup to S3 in target region
# (This happens automatically with cross-region backup storage)
# 3. Create database in target region from backup
redisctl cloud database create \
--subscription 55 \
--data '{
"name": "us-west-db",
"memory_limit_in_gb": 2,
"region": "us-west-2",
"restore_from_backup": {
"backup_id": "bkp-20251007-143022",
"source_subscription_id": 42
}
}' \
--wait
Migration from External Redis
From Self-Hosted Redis
# 1. Create RDB backup on source
redis-cli --rdb /tmp/redis-backup.rdb
# 2. Upload to cloud storage
aws s3 cp /tmp/redis-backup.rdb s3://my-bucket/migration/
aws s3 presign s3://my-bucket/migration/redis-backup.rdb --expires-in 3600
# 3. Import to Redis Cloud
redisctl cloud database import \
--database-id 42:12345 \
--data '{
"source_type": "http-url",
"import_from_uri": "https://presigned-url..."
}' \
--wait
From AWS ElastiCache
# 1. Create ElastiCache backup
aws elasticache create-snapshot \
--replication-group-id my-redis \
--snapshot-name migration-snapshot
# 2. Export to S3
aws elasticache copy-snapshot \
--source-snapshot-name migration-snapshot \
--target-snapshot-name migration-export \
--target-bucket my-bucket
# 3. Import to Redis Cloud (same as above)
Data Validation
Verify Migration Success
#!/bin/bash
# validate-migration.sh
SOURCE_HOST="source-redis"
SOURCE_PORT=6379
DEST_HOST="dest-redis"
DEST_PORT=12345
echo "Validating migration..."
# Compare key counts
SOURCE_KEYS=$(redis-cli -h $SOURCE_HOST -p $SOURCE_PORT DBSIZE)
DEST_KEYS=$(redis-cli -h $DEST_HOST -p $DEST_PORT DBSIZE)
echo "Source keys: $SOURCE_KEYS"
echo "Destination keys: $DEST_KEYS"
if [ "$SOURCE_KEYS" -eq "$DEST_KEYS" ]; then
echo "Key count matches!"
else
echo "WARNING: Key count mismatch!"
exit 1
fi
# Sample key validation (read from process substitution so "exit 1" aborts the
# whole script; in a pipeline the while-loop would run in a subshell)
while read -r key; do
SOURCE_VAL=$(redis-cli -h $SOURCE_HOST -p $SOURCE_PORT GET "$key")
DEST_VAL=$(redis-cli -h $DEST_HOST -p $DEST_PORT GET "$key")
if [ "$SOURCE_VAL" != "$DEST_VAL" ]; then
echo "Mismatch for key: $key"
exit 1
fi
done < <(redis-cli -h $SOURCE_HOST -p $SOURCE_PORT --scan --pattern "*" | head -100)
echo "Validation successful!"
Zero-Downtime Migration Pattern
#!/bin/bash
# zero-downtime-migration.sh
# 1. Setup replication
echo "Setting up replication..."
redisctl cloud database update \
--subscription 43 \
--database-id 67890 \
--data '{"replica_of": ["source-redis:6379"]}' \
--wait
# 2. Monitor lag until synced (lag = master offset minus replica offset;
# master_repl_offset alone is an absolute byte offset, not a lag)
echo "Waiting for initial sync..."
while true; do
MASTER_OFFSET=$(redis-cli -h source-redis -p 6379 INFO replication | \
tr -d '\r' | grep '^master_repl_offset:' | cut -d: -f2)
REPLICA_OFFSET=$(redis-cli -h new-redis -p 67890 INFO replication | \
tr -d '\r' | grep '^slave_repl_offset:' | cut -d: -f2)
LAG=$(( MASTER_OFFSET - REPLICA_OFFSET ))
if [ "$LAG" -lt 100 ]; then
break
fi
sleep 5
done
echo "Replication synced. Ready for cutover."
echo "Press ENTER to proceed with cutover..."
read
# 3. Stop writes to source (application-specific)
echo "Stop writes to source now!"
echo "Press ENTER when source is read-only..."
read
# 4. Wait for final sync
sleep 10
# 5. Promote replica
echo "Promoting replica to master..."
redisctl cloud database update \
--subscription 43 \
--database-id 67890 \
--data '{"replica_of": []}' \
--wait
echo "Migration complete! Update application to new endpoint."
Handling Large Databases
For databases > 10GB:
# 1. Use parallel import (if supported)
redisctl cloud database import \
--database-id 42:12345 \
--data '{
"source_type": "http-url",
"import_from_uri": "https://backup-url...",
"parallel_streams": 4
}' \
--wait \
--wait-timeout 7200 # 2 hours
Common Issues
Import Times Out
# Increase timeout for large databases
redisctl cloud database import \
--database-id 42:12345 \
--data '{"source_type": "http-url", "import_from_uri": "..."}' \
--wait \
--wait-timeout 3600 # 1 hour
RDB Version Mismatch
Error: Unsupported RDB version
Solution: Ensure the source Redis version is compatible. Redis Cloud supports RDB versions from Redis 2.6 and later.
Network Timeout During Import
Error: Failed to download from URI
Solution:
- Verify URL is accessible
- Check presigned URL hasn't expired
- Ensure no firewall blocks
- Use cloud storage in same region
Partial Import
Warning: Import completed but key count mismatch
Solution:
- Check for keys with TTL that expired
- Verify no writes during migration
- Check for maxmemory-policy evictions
- Review logs for specific errors
Best Practices
- Test First - Always test migration on staging
- Backup Source - Create backup before migration
- Plan Downtime - Communicate maintenance window
- Validate Data - Compare key counts and sample data
- Monitor Performance - Watch latency during cutover
- Keep Source - Don't delete source immediately
- Update DNS - Use DNS for easy rollback
Migration Checklist
- [ ] Source database backed up
- [ ] Destination database created and configured
- [ ] Network connectivity verified
- [ ] Import method selected
- [ ] Dry run completed successfully
- [ ] Monitoring in place
- [ ] Rollback plan documented
- [ ] Application updated with new endpoint
- [ ] Data validation successful
- [ ] Source database retained for N days
Next Steps
- Backup and Restore - Protect migrated data
- Configure ACLs - Secure new database
- Monitor Performance - Track after migration
- Setup High Availability - Add redundancy
See Also
- Database Import Reference
- Redis Migration Guide
- RIOT Tool - Advanced migration tool
Active-Active (CRDB) Setup
Time: 30-45 minutes
Prerequisites:
- Redis Cloud account with Active-Active subscription
- redisctl configured with Cloud credentials
- Understanding of multi-region deployments
What is Active-Active?
Active-Active (Conflict-free Replicated Database, CRDB) provides:
- Multiple writable regions simultaneously
- Automatic conflict resolution
- Local read/write latency in each region
- Geographic redundancy and disaster recovery
Quick Setup
# Create Active-Active subscription
redisctl cloud subscription create \
--data '{
"name": "global-aa",
"deployment_type": "active-active",
"regions": [
{"region": "us-east-1", "networking": {"cidr": "10.0.1.0/24"}},
{"region": "eu-west-1", "networking": {"cidr": "10.0.2.0/24"}},
{"region": "ap-southeast-1", "networking": {"cidr": "10.0.3.0/24"}}
]
}' \
--wait
# Create Active-Active database
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "global-cache",
"memory_limit_in_gb": 2,
"support_oss_cluster_api": true,
"data_persistence": "aof-every-1-second",
"replication": true
}' \
--wait
Step-by-Step Setup
1. Plan Your Regions
Choose regions close to your users:
# List available regions
redisctl cloud region list -o json -q '[].{
region: region,
provider: provider,
availability_zones: availability_zones
}'
Common patterns:
- US + EU: us-east-1, eu-west-1
- Global: us-east-1, eu-west-1, ap-southeast-1
- US Multi-Region: us-east-1, us-west-2
2. Create Active-Active Subscription
redisctl cloud subscription create \
--data '{
"name": "production-aa",
"deployment_type": "active-active",
"payment_method_id": 12345,
"cloud_provider": "AWS",
"regions": [
{
"region": "us-east-1",
"networking": {
"cidr": "10.1.0.0/24"
},
"preferred_availability_zones": ["use1-az1", "use1-az2"]
},
{
"region": "eu-west-1",
"networking": {
"cidr": "10.2.0.0/24"
},
"preferred_availability_zones": ["euw1-az1", "euw1-az2"]
}
]
}' \
--wait \
--wait-timeout 900
Important: Each region needs a unique CIDR block.
3. Create Active-Active Database
redisctl cloud database create \
--subscription 42 \
--data '{
"name": "global-sessions",
"memory_limit_in_gb": 5,
"protocol": "redis",
"support_oss_cluster_api": true,
"data_persistence": "aof-every-1-second",
"replication": true,
"throughput_measurement": {
"by": "operations-per-second",
"value": 50000
},
"data_eviction_policy": "volatile-lru",
"modules": [
{"name": "RedisJSON"}
]
}' \
--wait
4. Get Regional Endpoints
# Get all regional endpoints
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-o json \
-q '{
name: name,
endpoints: regions[].{
region: region,
public_endpoint: public_endpoint,
private_endpoint: private_endpoint
}
}'
Example output:
{
"name": "global-sessions",
"endpoints": [
{
"region": "us-east-1",
"public_endpoint": "redis-12345-us-east-1.cloud.redislabs.com:12345",
"private_endpoint": "redis-12345-us-east-1.internal.cloud.redislabs.com:12345"
},
{
"region": "eu-west-1",
"public_endpoint": "redis-12345-eu-west-1.cloud.redislabs.com:12346",
"private_endpoint": "redis-12345-eu-west-1.internal.cloud.redislabs.com:12346"
}
]
}
5. Configure Applications
Connect each application to its nearest region:
US Application:
import redis
r = redis.Redis(
host='redis-12345-us-east-1.cloud.redislabs.com',
port=12345,
password='your-password',
decode_responses=True
)
EU Application:
r = redis.Redis(
host='redis-12345-eu-west-1.cloud.redislabs.com',
port=12346,
password='your-password',
decode_responses=True
)
Network Connectivity
Setup VPC Peering for Each Region
# US East peering
redisctl cloud connectivity vpc-peering create-aa \
--subscription 42 \
--region-id 1 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-us-east-abc",
"vpc_cidr": "172.31.0.0/16",
"region": "us-east-1"
}' \
--wait
# EU West peering
redisctl cloud connectivity vpc-peering create-aa \
--subscription 42 \
--region-id 2 \
--data '{
"provider_name": "AWS",
"aws_account_id": "123456789012",
"vpc_id": "vpc-eu-west-xyz",
"vpc_cidr": "172.32.0.0/16",
"region": "eu-west-1"
}' \
--wait
Conflict Resolution
Active-Active databases resolve conflicts automatically using CRDTs; plain string values fall back to LWW (Last-Write-Wins):
Understanding Conflicts
# Example: Counter increment in both regions simultaneously
# US: INCR counter (value becomes 1)
# EU: INCR counter (value becomes 1)
# After sync: counter = 2 (both increments applied)
Conflict-Free Data Types
Use Redis data types that resolve conflicts automatically:
- Counters - INCR/DECR (additive)
- Sets - SADD/SREM (union)
- Sorted Sets - ZADD (merge by score)
- Hashes - HSET (field-level LWW)
Best Practices
# Good: Using counters
redis.incr('page:views')
# Good: Using sets
redis.sadd('user:tags', 'premium')
# Caution: Simple strings (LWW conflicts)
redis.set('user:status', 'active') # May conflict with other region
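The difference between these cases can be sketched with a toy merge simulation (this is an illustration of the semantics, not the actual CRDB implementation): additive types keep every region's contribution, while LWW keeps only the most recent write.

```python
def merge_counters(*region_deltas):
    """Additive merge: each region's local increments all survive."""
    return sum(region_deltas)

def merge_lww(*region_writes):
    """Last-write-wins merge for plain strings: the write with the
    newest timestamp survives; region_writes are (timestamp, value)."""
    return max(region_writes, key=lambda w: w[0])[1]

# Both regions INCR once: the merged counter keeps both increments.
print(merge_counters(1, 1))                       # 2
# Both regions SET the same key: only one value survives.
print(merge_lww((100, "active"), (101, "idle")))  # idle
```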
Monitoring Active-Active
Check Replication Lag
# Get replication status for all regions
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-o json \
-q 'regions[].{
region: region,
replication_lag: replication_lag_ms,
status: status
}'
Monitor Sync Traffic
# Check inter-region bandwidth usage
redisctl cloud subscription get \
--subscription 42 \
-q 'deployment.regions[].{
region: region,
sync_traffic_gb: sync_traffic_gb_per_month
}'
Scaling Active-Active
Add Region to Existing Database
# Add new region to subscription
redisctl cloud subscription update \
--subscription 42 \
--data '{
"add_regions": [
{
"region": "ap-southeast-1",
"networking": {
"cidr": "10.3.0.0/24"
}
}
]
}' \
--wait
# Database automatically extends to new region
Remove Region
redisctl cloud subscription update \
--subscription 42 \
--data '{
"remove_regions": ["ap-southeast-1"]
}' \
--wait
Disaster Recovery
Regional Failover
If a region becomes unavailable:
- Applications automatically retry against their local endpoint
- Update application configuration to point at a healthy region
- Data remains consistent across the remaining regions
# Check region health
redisctl cloud database get \
--subscription 42 \
--database-id 12345 \
-q 'regions[].{region: region, status: status}'
# Update application to use healthy region
# No data loss - all writes in healthy regions preserved
Cost Optimization
Monitor Inter-Region Traffic
# Check sync costs
redisctl cloud subscription get \
--subscription 42 \
-o json \
-q '{
monthly_sync_gb: (deployment.regions | map(&sync_traffic_gb_per_month, @) | sum(@)),
monthly_cost_estimate: monthly_cost
}'
Optimize for Read-Heavy Workloads
# Use read replicas in regions with heavy reads
redisctl cloud database update \
--subscription 42 \
--database-id 12345 \
--data '{
"replication": true,
"replica_count": 2
}' \
--wait
Common Patterns
Session Store
# Store sessions in nearest region
def store_session(session_id, data):
redis.hset(f'session:{session_id}', mapping=data)
redis.expire(f'session:{session_id}', 86400) # 24 hours
# Read from any region
def get_session(session_id):
return redis.hgetall(f'session:{session_id}')
Global Rate Limiting
# Distributed rate limit across regions
def check_rate_limit(user_id, limit=100):
key = f'rate:limit:{user_id}:{int(time.time() / 60)}'
count = redis.incr(key)
redis.expire(key, 120)
return count <= limit
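The fixed-window logic above can be illustrated without a Redis server; in this in-memory sketch a dict stands in for the shared keyspace, whereas in production the counter lives in Redis (INCR + EXPIRE) so all regions share it:

```python
import time

class FixedWindowLimiter:
    """In-memory illustration of the fixed-window rate-limit pattern."""

    def __init__(self, limit=100, window=60):
        self.limit = limit
        self.window = window
        self.counts = {}  # (user_id, window_index) -> request count

    def check(self, user_id, now=None):
        now = time.time() if now is None else now
        key = (user_id, int(now // self.window))
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

limiter = FixedWindowLimiter(limit=2, window=60)
print([limiter.check("u1", now=0) for _ in range(3)])  # [True, True, False]
```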
Leaderboards
# Global leaderboard
def update_score(user_id, score):
redis.zadd('leaderboard:global', {user_id: score})
def get_top_players(n=10):
return redis.zrevrange('leaderboard:global', 0, n-1, withscores=True)
Common Issues
High Replication Lag
# Check network connectivity between regions
# Increase bandwidth allocation
redisctl cloud subscription update \
--subscription 42 \
--data '{"bandwidth_gb_per_month": 500}' \
--wait
Conflict Resolution Issues
Solution: Design data model for conflict-free types:
- Use INCR instead of SET for counters
- Use SADD instead of SET for collections
- Use HSET for field-level updates instead of full object replacement
Region Addition Takes Long Time
Solution: Adding regions requires data sync. For large databases:
- Expect 1-2 hours for initial sync
- Monitor progress with --wait-timeout 7200
Production Best Practices
- Design for Conflicts - Use conflict-free data types
- Local Writes - Always write to nearest region
- Monitor Lag - Alert on high replication lag
- Test Failover - Regularly test regional failures
- Plan CIDRs - Use non-overlapping CIDR blocks
- Optimize Bandwidth - Monitor inter-region traffic costs
Next Steps
- Setup VPC Peering - Private connectivity per region
- Configure ACLs - Secure all regional endpoints
- Monitor Performance - Track per-region metrics
- Backup and Restore - Multi-region backup strategy
See Also
Create Your First Redis Enterprise Database
⏱️ Time: 5 minutes
📋 Prerequisites:
- Redis Enterprise cluster running (see cluster setup)
- redisctl installed (installation guide)
- Profile configured with Enterprise credentials (authentication guide)
Quick Command
Create a basic database with one command:
redisctl enterprise database create \
--data '{"name": "my-first-db", "memory_size": 1073741824}' \
--wait
Step-by-Step Guide
1. Verify Cluster Connection
Check that redisctl can connect to your cluster:
redisctl enterprise cluster get -o json -q 'name'
What you should see:
"cluster1.local"
Troubleshooting:
- ❌ "Connection refused" → Check REDIS_ENTERPRISE_URL or profile settings
- ❌ "401 Unauthorized" → Verify credentials with redisctl profile get
- ❌ "SSL error" → Add the --insecure flag or set REDIS_ENTERPRISE_INSECURE=true
2. Check Available Resources
See what resources are available:
redisctl enterprise cluster get -o json -q '{
shards_limit: shards_limit,
shards_used: shards_used,
memory_size: memory_size
}'
Example output:
{
"shards_limit": 100,
"shards_used": 5,
"memory_size": 107374182400
}
3. Create the Database
Minimum configuration (1GB database):
redisctl enterprise database create \
--data '{
"name": "my-first-db",
"memory_size": 1073741824,
"type": "redis",
"port": 12000
}' \
--wait
Common options:
- memory_size: Size in bytes (1073741824 = 1 GB, 10737418240 = 10 GB)
- type: redis or memcached
- port: Must be unique on the cluster (12000-19999 typical range)
- replication: true for high availability
- sharding: true for clustering across shards
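Because memory_size is specified in bytes, a tiny conversion helper avoids off-by-a-factor mistakes when building payloads:

```python
def gb_to_bytes(gb):
    """Convert gigabytes (GiB) to the byte value memory_size expects."""
    return int(gb * 1024 ** 3)

def bytes_to_gb(b):
    """Convert a memory_size byte value back to GiB."""
    return b / 1024 ** 3

print(gb_to_bytes(1))           # 1073741824
print(gb_to_bytes(10))          # 10737418240
print(bytes_to_gb(5368709120))  # 5.0
```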
What you should see:
{
"uid": 1,
"name": "my-first-db",
"status": "active",
"port": 12000,
"memory_size": 1073741824,
"endpoint": "redis-12000.cluster1.local"
}
4. Get Connection Details
Retrieve your database endpoint and authentication:
redisctl enterprise database get --database-id 1 -o json -q '{
endpoint: dns_address_master,
port: port,
password: authentication_redis_pass
}'
Output:
{
"endpoint": "redis-12000.cluster1.local",
"port": 12000,
"password": "your-password-here"
}
5. Test Connection
Using redis-cli:
redis-cli -h redis-12000.cluster1.local \
-p 12000 \
-a your-password-here \
PING
Expected response: PONG
Advanced Configuration
High Availability Database
Create a replicated database with automatic failover:
redisctl enterprise database create \
--data '{
"name": "ha-database",
"memory_size": 10737418240,
"type": "redis",
"port": 12001,
"replication": true,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec"
}' \
--wait
Clustered Database
Create a sharded database for scaling:
redisctl enterprise database create \
--data '{
"name": "clustered-db",
"memory_size": 53687091200,
"type": "redis",
"port": 12002,
"sharding": true,
"shards_count": 5,
"oss_cluster": true
}' \
--wait
Using a Configuration File
For complex setups:
# Create database-config.json
cat > database-config.json << 'EOF'
{
"name": "production-db",
"memory_size": 21474836480,
"type": "redis",
"port": 12003,
"replication": true,
"sharding": true,
"shards_count": 3,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec",
"eviction_policy": "volatile-lru",
"oss_cluster": true,
"authentication_redis_pass": "my-secure-password"
}
EOF
redisctl enterprise database create \
--data @database-config.json \
--wait
Common Issues
Port Already in Use
Error: Port 12000 is already allocated
Solution: Use a different port or check existing databases:
redisctl enterprise database list -o json -q '[].port'
Insufficient Cluster Resources
Error: Not enough memory available
Solution: Check cluster capacity:
redisctl enterprise cluster get -q '{available_memory: (memory_size - memory_used)}'
Database Stuck in "pending"
Status: pending
Solution: Check cluster node status:
redisctl enterprise node list -o table
All nodes should show online status. If not, investigate node issues first.
Memory Size Reference
Quick conversion table:
| Description | Bytes | Human |
|---|---|---|
| 100 MB | 104857600 | 0.1 GB |
| 500 MB | 524288000 | 0.5 GB |
| 1 GB | 1073741824 | 1 GB |
| 5 GB | 5368709120 | 5 GB |
| 10 GB | 10737418240 | 10 GB |
| 50 GB | 53687091200 | 50 GB |
| 100 GB | 107374182400 | 100 GB |
Or use shell arithmetic expansion in bash/zsh, e.g. $((1024 * 1024 * 1024)) for 1 GB.
Next Steps
Now that you have a database:
- 🔒 Configure Redis ACLs - Secure your database with access controls
- 💾 Generate Support Package - Troubleshooting and diagnostics
- 🔄 Configure Replication - Set up replica databases
- 📊 Monitor Database Health - Track performance metrics
See Also
- Enterprise Database Command Reference - Complete command documentation
- Database Configuration Options - All configuration parameters
- Redis Enterprise Documentation - Official Redis Enterprise docs
Generate and Upload a Support Package
⏱️ Time: 10-15 minutes
📋 Prerequisites:
- Redis Enterprise cluster running
- redisctl installed and configured
- (Optional) Files.com account for upload (sign up)
Quick Command
Generate support package for entire cluster:
redisctl enterprise support-package cluster \
--file /tmp/support-package.tar.gz
What is a Support Package?
A support package is a comprehensive diagnostic bundle containing:
- Cluster configuration and logs
- Database configurations and statistics
- Node health and metrics
- Network configuration
- Redis server logs
Used for troubleshooting with Redis support or internal diagnostics.
Step-by-Step Guide
1. Generate Basic Support Package
Create a support package for the entire cluster:
redisctl enterprise support-package cluster \
--file /tmp/cluster-support-$(date +%Y%m%d).tar.gz
What you should see:
Generating support package...
Support package saved to: /tmp/cluster-support-20251007.tar.gz
Size: 45.2 MB
2. Generate for Specific Database
Create a package for just one database (smaller, faster):
redisctl enterprise support-package database \
--database-id 1 \
--file /tmp/db1-support.tar.gz
3. Optimize Before Upload
Reduce package size for faster upload:
redisctl enterprise support-package database \
--database-id 1 \
--optimize \
--file /tmp/db1-optimized.tar.gz
What --optimize does:
- Compresses logs more aggressively
- Excludes large binary dumps
- Typically 50-70% smaller
- Still contains all diagnostic info
4. Upload to Files.com
One-Time Setup
Set up your Files.com API key:
# Store securely in keyring (recommended)
redisctl files-key set --use-keyring
# Or set as environment variable
export FILES_API_KEY="your-api-key"
Generate and Upload
Create package and upload in one command:
redisctl enterprise support-package database \
--database-id 1 \
--optimize \
--upload \
--no-save
Flags explained:
- --upload: Upload to Files.com after generation
- --no-save: Don't save locally (only upload)
- --optimize: Reduce size before upload
What you should see:
Generating support package...
Optimizing package...
Uploading to Files.com...
✓ Uploaded: /support-packages/db1-20251007-abc123.tar.gz
URL: https://yourcompany.files.com/file/support-packages/db1-20251007-abc123.tar.gz
Advanced Usage
Generate with Custom Filters
Exclude certain log types:
redisctl enterprise support-package database \
--database-id 1 \
--file /tmp/filtered-support.tar.gz
Automated Uploads
Schedule regular support package uploads:
#!/bin/bash
# upload-support-package.sh
DATE=$(date +%Y%m%d-%H%M%S)
DB_ID=$1
redisctl enterprise support-package database \
--database-id "$DB_ID" \
--optimize \
--upload \
--no-save \
-o json | tee /var/log/support-upload-$DATE.log
Run via cron:
# Daily at 2 AM for database 1
0 2 * * * /usr/local/bin/upload-support-package.sh 1
Share with Redis Support
Generate and get sharable link:
RESULT=$(redisctl enterprise support-package cluster \
--optimize \
--upload \
--no-save \
-o json)
URL=$(echo "$RESULT" | jq -r '.upload_url')
echo "Share this URL with Redis Support:"
echo "$URL"
Common Issues
Package Generation Times Out
Error: Support package generation timed out
Solution: Use optimize flag to reduce generation time:
redisctl enterprise support-package cluster \
--optimize \
--file /tmp/support.tar.gz
Upload Fails
Error: Failed to upload to Files.com: 401 Unauthorized
Solution: Verify API key:
# Check current configuration
redisctl files-key get
# Re-enter API key
redisctl files-key set --use-keyring
Insufficient Disk Space
Error: Not enough disk space
Solution: Use --optimize or clean up old packages:
# Find old packages
find /tmp -name "*support*.tar.gz" -mtime +7
# Use optimization
redisctl enterprise support-package cluster \
--optimize \
--file /tmp/support.tar.gz
Database Not Found
Error: Database with ID 999 not found
Solution: List available databases:
redisctl enterprise database list -o table -q '[].{id: uid, name: name}'
Package Size Reference
Typical sizes (uncompressed / compressed):
| Scope | Uncompressed | Compressed | Optimized |
|---|---|---|---|
| Single small DB | 100-200 MB | 40-80 MB | 15-30 MB |
| Single large DB | 500 MB-2 GB | 200-800 MB | 50-200 MB |
| Entire cluster | 1-10 GB | 500 MB-3 GB | 200 MB-1 GB |
What's Inside?
A support package typically contains:
support-package/
├── cluster/
│ ├── cluster-config.json
│ ├── cluster-logs/
│ └── cluster-stats.json
├── databases/
│ ├── db-1/
│ │ ├── config.json
│ │ ├── stats.json
│ │ └── redis-logs/
│ └── db-2/...
├── nodes/
│ ├── node-1/
│ │ ├── system-info.json
│ │ ├── network-config.json
│ │ └── logs/
│ └── node-2/...
└── metadata.json
Next Steps
- 📊 Monitor Cluster Health - Proactive monitoring
- 🔍 Troubleshooting Guide - Common issues and solutions
- 🛠️ Node Management - Manage cluster nodes
See Also
- Support Package Command Reference - Complete command documentation
- Files.com Integration Guide - API key management
- Redis Enterprise Support - Contact Redis support
Cluster Health Check
Time: 5 minutes
Prerequisites:
- Redis Enterprise cluster running
- redisctl configured with Enterprise credentials
Quick Health Check
# Get cluster overview
redisctl enterprise cluster get -o json -q '{
name: name,
nodes: nodes_count,
shards: shards_count,
databases: databases_count,
status: cluster_state
}'
# Check all nodes
redisctl enterprise node list -o table -q '[].{
id: uid,
addr: addr,
role: role,
status: status,
cores: cores,
memory_available: (total_memory - provisional_memory - used_memory)
}'
Detailed Health Checks
1. Cluster Status
redisctl enterprise cluster get -o json -q '{
state: cluster_state,
license_state: license_state,
quorum: quorum_only,
shards: {used: shards_count, limit: shards_limit},
memory: {used: memory_size, available: ephemeral_storage_size}
}'
2. Node Health
# Check each node status
redisctl enterprise node list -o json -q '[].{
node: uid,
status: status,
uptime: uptime,
cpu: cpu_idle,
memory_used: (used_memory / total_memory * 100),
disk_used: (ephemeral_storage_used / ephemeral_storage_size * 100)
}'
3. Database Health
# List all databases with key metrics
redisctl enterprise database list -o json -q '[].{
db: uid,
name: name,
status: status,
memory: memory_size,
shards: shards_count,
ops_sec: total_req
}'
4. Alert Status
# Check cluster alerts
redisctl enterprise cluster alerts -o json -q '{
enabled: alerts_settings.enabled,
active_alerts: alerts[?state==`active`].name
}'
# Check node alerts
redisctl enterprise node alerts -o table
Automated Health Monitoring
#!/bin/bash
# cluster-health-check.sh
echo "Redis Enterprise Cluster Health Check"
echo "======================================"
# Cluster state
echo "Cluster Status:"
redisctl enterprise cluster get -q 'cluster_state'
# Node count and status
NODES=$(redisctl enterprise node list -o json -q 'length([])')
HEALTHY_NODES=$(redisctl enterprise node list -o json -q '[?status==`active`] | length([])')
echo "Nodes: $HEALTHY_NODES/$NODES healthy"
# Database status
DBS=$(redisctl enterprise database list -o json -q 'length([])')
ACTIVE_DBS=$(redisctl enterprise database list -o json -q '[?status==`active`] | length([])')
echo "Databases: $ACTIVE_DBS/$DBS active"
# Resource usage
SHARD_USAGE=$(redisctl enterprise cluster get -o json -q '((shards_count / shards_limit * 100) | floor)')
MEMORY_USAGE=$(redisctl enterprise cluster get -o json -q '((memory_size / ephemeral_storage_size * 100) | floor)')
echo "Resource Usage: Shards $SHARD_USAGE%, Memory $MEMORY_USAGE%"
# Exit code based on health
if [ "$HEALTHY_NODES" -eq "$NODES" ] && [ "$ACTIVE_DBS" -eq "$DBS" ]; then
echo "Status: HEALTHY"
exit 0
else
echo "Status: DEGRADED"
exit 1
fi
Next Steps
- Node Management - Manage cluster nodes
- Database Monitoring - Track database metrics
- Generate Support Package - Troubleshooting tools
See Also
- Cluster Command Reference
- Node Command Reference
Node Management
Time: 10-15 minutes
Prerequisites:
- Redis Enterprise cluster with multiple nodes
- redisctl configured with Enterprise credentials
- Admin access to cluster
Quick Commands
# List all nodes
redisctl enterprise node list -o table
# Get specific node details
redisctl enterprise node get --node-id 1 -o json
# Check node status
redisctl enterprise node list -q '[].{id: uid, status: status, role: role}'
Node Operations
1. View Node Details
# Get comprehensive node info
redisctl enterprise node get --node-id 1 -o json -q '{
uid: uid,
addr: addr,
status: status,
role: role,
cores: cores,
memory: {
total: total_memory,
used: used_memory,
available: (total_memory - used_memory)
},
storage: {
total: ephemeral_storage_size,
used: ephemeral_storage_used,
available: ephemeral_storage_avail
},
uptime: uptime,
version: software_version
}'
2. Add Node to Cluster
# Prepare new node (run on new node)
curl -k https://localhost:9443/v1/bootstrap/create_cluster \
-H "Content-Type: application/json" \
-d '{
"action": "join_cluster",
"cluster": {
"nodes": ["10.0.1.10:9443"],
"username": "admin@cluster.local",
"password": "admin-password"
}
}'
# Verify node joined
redisctl enterprise node list -o table
3. Remove Node from Cluster
# First, ensure no databases are on this node
redisctl enterprise database list -o json -q '[?node_uid==`3`]'
# Remove node
redisctl enterprise node delete --node-id 3 --wait
4. Update Node Configuration
redisctl enterprise node update \
--node-id 1 \
--data '{
"max_listeners": 100,
"max_redis_servers": 50
}'
Node Maintenance
Enable Maintenance Mode
# Put node in maintenance mode (no new shards)
redisctl enterprise node update \
--node-id 2 \
--data '{"accept_servers": false}'
# Verify
redisctl enterprise node get --node-id 2 -q 'accept_servers'
Drain Node
Move all shards off a node before maintenance:
#!/bin/bash
NODE_ID=2
# Get all shards on this node
SHARDS=$(redisctl enterprise shard list \
--node $NODE_ID \
-o json \
-q '[].uid')
# Migrate each shard to another node
for shard in $SHARDS; do
echo "Migrating shard $shard..."
redisctl enterprise shard migrate \
--uid $shard \
--target-node 1 \
--force
done
echo "Node $NODE_ID drained"
Check Node Resources
redisctl enterprise node get --node-id 1 -o json -q '{
cpu_idle: cpu_idle,
memory_free_pct: ((total_memory - used_memory) / total_memory * 100 | floor),
disk_free_pct: (ephemeral_storage_avail / ephemeral_storage_size * 100 | floor),
connections: conns,
shards: shard_count
}'
Monitoring Nodes
Node Health Script
#!/bin/bash
# node-health.sh
echo "Node Health Report"
echo "=================="
# Using JMESPath sprintf() for formatted string output
redisctl enterprise node list -q '
[].sprintf(
`"Node %s: %s - CPU: %.0f%% idle, Memory: %.0f%% used, Shards: %d"`,
uid, status, cpu_idle,
multiply(divide(used_memory, total_memory), `100`),
shard_count
)' --raw
# Alternative: Use JMESPath for structured table output
redisctl enterprise node list -q '[].{node: uid, status: status, cpu_idle: cpu_idle, shards: shard_count}' -o table
Resource Alerts
# Check for nodes with high resource usage
redisctl enterprise node list -o json -q '
[?
(used_memory / total_memory * 100) > 80 ||
(ephemeral_storage_used / ephemeral_storage_size * 100) > 85 ||
cpu_idle < 20
].{
node: uid,
memory_pct: (used_memory / total_memory * 100 | floor),
disk_pct: (ephemeral_storage_used / ephemeral_storage_size * 100 | floor),
cpu_idle: cpu_idle
}
'
Node Failover
Check Quorum
# Ensure cluster has quorum before operations
redisctl enterprise cluster get -q '{
quorum: quorum_only,
nodes: nodes_count,
required: ((nodes_count / 2 | floor) + 1)
}'
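The quorum formula in the query above (floor(n/2) + 1) also explains the odd-node recommendation:

```python
def quorum_required(node_count):
    """Minimum reachable nodes needed to keep cluster quorum."""
    return node_count // 2 + 1

for n in (3, 4, 5):
    print(n, quorum_required(n))
```

Note that a 4-node cluster needs 3 nodes for quorum and so tolerates only one failure, the same as a 3-node cluster; the extra even node adds cost without adding failure tolerance.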
Handle Failed Node
# Identify failed node
redisctl enterprise node list -q '[?status!=`active`].{id: uid, status: status}'
# Check affected databases
redisctl enterprise database list -o json -q '[?node_uid==`3`].{db: uid, name: name}'
# Trigger failover for affected databases
redisctl enterprise database update \
--database-id 1 \
--data '{"action": "failover"}'
Common Issues
Node Not Responding
# Check node connectivity
curl -k https://node-ip:9443/v1/cluster
# Check from another node
redisctl enterprise node get --node-id 2 -q 'status'
High Memory Usage
# Find databases using most memory on node
redisctl enterprise database list -o json -q '
[?node_uid==`1`] |
sort_by(@, &memory_size) |
reverse(@) |
[].{db: uid, name: name, memory_gb: (memory_size / 1073741824)}
'
Best Practices
- Always maintain quorum - Keep an odd number of nodes
- Monitor resources - Set up alerts for CPU, memory, disk
- Regular health checks - Automated monitoring
- Graceful operations - Drain nodes before maintenance
- Plan capacity - Add nodes before reaching limits
Next Steps
- Cluster Health Check - Monitor overall cluster health
- Generate Support Package - Troubleshooting tools
- Database Management - Manage databases
See Also
- Node Command Reference
- Shard Management
Configure Database Replication
Time: 10-15 minutes
Prerequisites:
- Redis Enterprise cluster with multiple nodes
- redisctl configured with Enterprise credentials
- Database already created
What is Replication?
Replication provides:
- High availability - automatic failover if master fails
- Read scalability - distribute reads across replicas
- Data durability - multiple copies of data
Quick Setup
# Enable replication on existing database
redisctl enterprise database update \
--database-id 1 \
--data '{
"replication": true,
"shards_count": 2
}' \
--wait
Step-by-Step Setup
1. Create Database with Replication
redisctl enterprise database create \
--data '{
"name": "replicated-db",
"memory_size": 1073741824,
"type": "redis",
"port": 12000,
"replication": true,
"shards_count": 1,
"sharding": false
}' \
--wait
2. Verify Replication Status
redisctl enterprise database get \
--database-id 1 \
-o json \
-q '{
name: name,
replication: replication,
shards_count: shards_count,
endpoints: endpoints
}'
3. Check Shard Distribution
redisctl enterprise shard list-by-database \
--bdb-uid 1 \
-o json \
-q '[].{
uid: uid,
role: role,
node: node_uid,
status: status
}'
Expected: One master and one replica shard on different nodes.
Replication Topology
Single Master with Replica
# Default configuration
{
"replication": true,
"shards_count": 1,
"sharding": false
}
# Result: 1 master + 1 replica = 2 total shards
Sharded with Replication
# Clustered database with replication
{
"replication": true,
"shards_count": 3,
"sharding": true
}
# Result: 3 master + 3 replica = 6 total shards
Advanced Configuration
Set Replica Count
# Multiple replicas per master
redisctl enterprise database update \
--database-id 1 \
--data '{
"replication": true,
"replica_sources": [
{"replica_source_name": "replica1", "replica_source_type": "replica"},
{"replica_source_name": "replica2", "replica_source_type": "replica"}
]
}' \
--wait
Rack Awareness
Ensure master and replicas are on different racks/zones:
redisctl enterprise database update \
--database-id 1 \
--data '{
"rack_aware": true
}' \
--wait
Monitoring Replication
Check Replication Lag
# Get replication lag for database
redis-cli -h localhost -p 12000 INFO replication
# Or via REST API
redisctl enterprise database get \
--database-id 1 \
-q 'replica_sync[].{
replica: replica_uid,
lag: lag,
status: status
}'
Monitor Sync Status
# Check if replicas are in sync
redisctl enterprise shard list-by-database \
--bdb-uid 1 \
-o json \
-q '[?role==`replica`].{
shard: uid,
status: status,
sync_status: sync_status
}'
Failover Operations
Manual Failover
# Failover specific shard
redisctl enterprise shard failover \
--uid 1:1 \
--force
# Verify new master
redisctl enterprise shard get --uid 1:1 -q 'role'
Automatic Failover
Enabled by default with replication:
# Check failover settings
redisctl enterprise database get \
--database-id 1 \
-q '{
replication: replication,
watchdog_profile: watchdog_profile
}'
Replica Configuration
Read-Only Replicas
# Configure replica as read-only (default)
redisctl enterprise database update \
--database-id 1 \
--data '{
"replica_of": {
"endpoints": ["master-db:12000"],
"readonly": true
}
}' \
--wait
External Replication Source
Replicate from external Redis:
redisctl enterprise database create \
--data '{
"name": "replica-db",
"memory_size": 1073741824,
"type": "redis",
"port": 12001,
"replica_of": {
"endpoints": ["external-redis.example.com:6379"],
"authentication_redis_pass": "source-password"
}
}' \
--wait
Replication Performance
Optimize Replication Speed
# Increase replication backlog to 100 MB
redisctl enterprise database update \
--database-id 1 \
--data '{
"repl_backlog_size": 104857600
}' \
--wait
Monitor Replication Traffic
redisctl enterprise database get \
--database-id 1 \
-o json \
-q '{
replication_traffic: repl_traffic,
backlog_size: repl_backlog_size
}'
Common Patterns
High Availability Setup
# Production-ready HA configuration
redisctl enterprise database create \
--data '{
"name": "ha-database",
"memory_size": 10737418240,
"type": "redis",
"port": 12000,
"replication": true,
"shards_count": 3,
"sharding": true,
"rack_aware": true,
"data_persistence": "aof",
"aof_policy": "appendfsync-every-sec"
}' \
--wait
Read Scaling with Replicas
# Application pattern: writes to master, reads from replicas
from redis import Redis, Sentinel
# Connect to master for writes
master = Redis(host='master-endpoint', port=12000)
master.set('key', 'value')
# Connect to replica for reads
replica = Redis(host='replica-endpoint', port=12001)
value = replica.get('key')
Disaster Recovery
Backup Replication Status
# Save replication configuration
redisctl enterprise database get \
--database-id 1 \
-o json > db-replication-config.json
Restore After Failure
# Recreate database with same configuration
redisctl enterprise database create \
--data @db-replication-config.json \
--wait
Common Issues
Replication Lag Increasing
# Check network between nodes
redisctl enterprise node list -o table
# Check shard placement
redisctl enterprise shard list-by-database --bdb-uid 1 -o table
# Consider adding more replicas or increasing bandwidth
Replica Out of Sync
# Force resync
redisctl enterprise shard failover --uid 1:2 --force
# Check sync status
redisctl enterprise shard get --uid 1:2 -q 'sync_status'
Split Brain Scenario
Prevention:
- Always use an odd number of cluster nodes
- Enable rack awareness
- Monitor node connectivity
Recovery:
# Identify correct master
redisctl enterprise shard list-by-database --bdb-uid 1 \
-q '[?role==`master`]'
# Force failover if needed
redisctl enterprise database update \
--database-id 1 \
--data '{"action": "recover"}' \
--wait
Best Practices
- Always Enable for Production - Replication is critical for HA
- Use Rack Awareness - Distribute across failure domains
- Monitor Replication Lag - Alert on high lag
- Test Failover - Regularly test automatic failover
- Plan Capacity - Replicas consume the same resources as the master
- Persist Configuration - Backup replication settings
Next Steps
- Cluster Health Check - Monitor replication health
- Node Management - Manage replica placement
- Generate Support Package - Troubleshooting tools
- Create Database - Database configuration basics
See Also
- Database Replication Reference
- Shard Management
- Redis Replication
Configure Redis ACLs
Time: 10 minutes
Prerequisites:
- Redis Enterprise cluster (v6.0+)
- redisctl configured with Enterprise credentials
- Understanding of Redis ACL syntax
Quick Setup
# Create ACL with read-only access
redisctl enterprise redis-acl create \
--data '{
"name": "readonly",
"acl": "+@read ~*"
}' \
--wait
# Apply to database
redisctl enterprise database update \
--database-id 1 \
--data '{
"redis_acls": [{"name": "readonly"}]
}' \
--wait
Redis ACL Syntax
Command Permissions
+@read # Allow all read commands
+@write # Allow all write commands
+@admin # Allow admin commands
-@dangerous # Deny dangerous commands
+get +set # Allow specific commands
-flushdb # Deny specific command
Key Patterns
~* # All keys
~cache:* # Keys starting with "cache:"
~user:123:* # Specific user keys
~* ~-secret:* # All except "secret:" prefix
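When designing key prefixes, it can help to sanity-check which keys a pattern covers. A simplified sketch using Python's glob-style fnmatch (real ACL evaluation also involves command categories and exclusions, which this does not model):

```python
from fnmatch import fnmatchcase

def pattern_allows(patterns, key):
    """Simplified check: does any glob-style ACL key pattern match the key?"""
    return any(fnmatchcase(key, p) for p in patterns)

print(pattern_allows(["cache:*"], "cache:user:1"))         # True
print(pattern_allows(["cache:*"], "session:abc"))          # False
print(pattern_allows(["user:123:*"], "user:123:profile"))  # True
```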
Creating ACL Rules
Basic ACL Rules
# Read-only access
redisctl enterprise redis-acl create \
--data '{
"name": "readonly",
"acl": "+@read ~*"
}'
# Write to specific keys
redisctl enterprise redis-acl create \
--data '{
"name": "cache-writer",
"acl": "+@write +@read ~cache:*"
}'
# Admin without dangerous commands
redisctl enterprise redis-acl create \
--data '{
"name": "safe-admin",
"acl": "+@all -@dangerous ~*"
}'
Apply ACLs to Database
redisctl enterprise database update \
--database-id 1 \
--data '{
"redis_acls": [
{"name": "readonly", "password": "ReadPass123!"},
{"name": "cache-writer", "password": "WritePass456!"}
]
}' \
--wait
Testing ACLs
# Test readonly user
redis-cli -h localhost -p 12000 \
--user readonly \
--pass ReadPass123! \
GET mykey # Works
redis-cli --user readonly --pass ReadPass123! \
SET mykey value # Fails with NOPERM
# Test cache-writer user
redis-cli --user cache-writer --pass WritePass456! \
SET cache:item value # Works
redis-cli --user cache-writer --pass WritePass456! \
SET other:item value # Fails
Common ACL Patterns
Application Access Tiers
# Level 1: Read-only
redisctl enterprise redis-acl create \
--data '{"name": "app-read", "acl": "+@read +ping ~*"}'
# Level 2: Read + Write cache
redisctl enterprise redis-acl create \
--data '{"name": "app-cache", "acl": "+@read +@write ~cache:* ~session:*"}'
# Level 3: Full access
redisctl enterprise redis-acl create \
--data '{"name": "app-admin", "acl": "+@all -flushdb -flushall ~*"}'
Multi-Tenant Isolation
# Tenant A
redisctl enterprise redis-acl create \
--data '{"name": "tenant-a", "acl": "+@all ~tenant:a:*"}'
# Tenant B
redisctl enterprise redis-acl create \
--data '{"name": "tenant-b", "acl": "+@all ~tenant:b:*"}'
Managing ACLs
List ACLs
redisctl enterprise redis-acl list -o table
Update ACL
redisctl enterprise redis-acl update \
--acl-id 123 \
--data '{
"name": "readonly",
"acl": "+@read +@connection ~*"
}'
Delete ACL
redisctl enterprise redis-acl delete --acl-id 123
Best Practices
- Principle of Least Privilege - Grant minimum required access
- Use Key Prefixes - Design schema for ACL isolation
- Deny Dangerous Commands - Always exclude FLUSHDB, KEYS, etc.
- Strong Passwords - Use secure passwords for each ACL
- Test Thoroughly - Verify ACLs before production use
- Document ACLs - Maintain clear documentation of each rule
Next Steps
- Create Database - Database setup
- Configure Replication - High availability
- Cluster Health Check - Monitoring
See Also
Environment Variables
Complete reference of environment variables supported by redisctl.
Redis Cloud
| Variable | Description | Example |
|---|---|---|
| REDIS_CLOUD_API_KEY | API account key | A3qcymrvqpn9rr... |
| REDIS_CLOUD_API_SECRET | API secret key | S3s8ecrrnaguqk... |
| REDIS_CLOUD_API_URL | API endpoint (optional) | https://api.redislabs.com/v1 |
Redis Enterprise
| Variable | Description | Example |
|---|---|---|
| REDIS_ENTERPRISE_URL | Cluster API URL | https://cluster:9443 |
| REDIS_ENTERPRISE_USER | Username | admin@cluster.local |
| REDIS_ENTERPRISE_PASSWORD | Password | your-password |
| REDIS_ENTERPRISE_INSECURE | Allow self-signed certs | true or false |
General
| Variable | Description | Example |
|---|---|---|
| REDISCTL_PROFILE | Default profile name | production |
| REDISCTL_OUTPUT | Default output format | json, yaml, table |
| RUST_LOG | Logging level | error, warn, info, debug |
| NO_COLOR | Disable colored output | 1 or any value |
Usage Examples
Basic Setup
# Redis Cloud
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# Redis Enterprise
export REDIS_ENTERPRISE_URL="https://localhost:9443"
export REDIS_ENTERPRISE_USER="admin@cluster.local"
export REDIS_ENTERPRISE_PASSWORD="password"
export REDIS_ENTERPRISE_INSECURE="true"
Debugging
# Enable debug logging
export RUST_LOG=debug
redisctl api cloud get /
# Trace specific modules
export RUST_LOG=redisctl=debug,redis_cloud=trace
CI/CD
# GitHub Actions
env:
REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_API_KEY }}
REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_API_SECRET }}
Precedence
Environment variables take the highest priority (see the Credential Priority Hierarchy under Troubleshooting). They override:
- Configuration file (profile) settings
- Command-line flags
- Default values (lowest priority)
Configuration File
Complete reference for the redisctl configuration file format and options.
File Location
The configuration file is stored at:
- Linux/macOS: ~/.config/redisctl/config.toml
- Windows: %APPDATA%\redis\redisctl\config.toml
View the exact path:
redisctl profile path
File Format
The configuration file uses TOML format:
# Default profile to use when none specified
default_profile = "production"
# Profile definitions
[profiles.production]
deployment_type = "cloud"
api_key = "your-api-key"
api_secret = "your-api-secret"
api_url = "https://api.redislabs.com/v1"
[profiles.enterprise-local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "your-password"
insecure = true
Profile Configuration
Cloud Profile
All available options for Redis Cloud profiles:
[profiles.cloud-example]
# Required: Deployment type
deployment_type = "cloud"
# Required: API credentials
api_key = "A3qcymrvqpn9rrgdt40sv5f9yfxob26vx64hwddh8vminqnkgfq"
api_secret = "S3s8ecrrnaguqkvwfvealoe3sn25zqs4wc4lwgo4rb0ud3qm77c"
# Optional: API endpoint (defaults to production)
api_url = "https://api.redislabs.com/v1"
# Optional: Custom timeout (seconds)
timeout = 30
# Optional: Retry configuration
max_retries = 3
retry_delay = 1
Enterprise Profile
All available options for Redis Enterprise profiles:
[profiles.enterprise-example]
# Required: Deployment type
deployment_type = "enterprise"
# Required: Cluster URL
url = "https://cluster.example.com:9443"
# Required: Authentication
username = "admin@example.com"
password = "secure-password"
# Optional: Allow self-signed certificates
insecure = false
# Optional: Custom timeout (seconds)
timeout = 60
# Optional: Client certificate authentication
client_cert = "/path/to/client.crt"
client_key = "/path/to/client.key"
# Optional: Custom CA certificate
ca_cert = "/path/to/ca.crt"
Environment Variable Expansion
The configuration file supports environment variable expansion using shell-style variable syntax:
Basic Expansion
[profiles.production]
deployment_type = "cloud"
api_key = "${REDIS_CLOUD_API_KEY}"
api_secret = "${REDIS_CLOUD_API_SECRET}"
With Default Values
[profiles.staging]
deployment_type = "cloud"
api_key = "${STAGING_API_KEY}"
api_secret = "${STAGING_API_SECRET}"
# Use production URL if STAGING_API_URL not set
api_url = "${STAGING_API_URL:-https://api.redislabs.com/v1}"
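The ${VAR:-default} syntax mirrors shell parameter expansion, so its semantics can be verified directly in a shell (the staging URL below is a made-up placeholder):

```shell
# ${VAR:-default} resolves to the default only when VAR is unset or empty.
unset STAGING_API_URL
url_default="${STAGING_API_URL:-https://api.redislabs.com/v1}"
echo "$url_default"    # the fallback URL

STAGING_API_URL="https://api.staging.example.com/v1"
url_set="${STAGING_API_URL:-https://api.redislabs.com/v1}"
echo "$url_set"        # the variable's own value
```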
Complex Example
default_profile = "${REDISCTL_DEFAULT_PROFILE:-development}"
[profiles.development]
deployment_type = "cloud"
api_key = "${DEV_API_KEY}"
api_secret = "${DEV_API_SECRET}"
api_url = "${DEV_API_URL:-https://api.redislabs.com/v1}"
[profiles.production]
deployment_type = "cloud"
api_key = "${PROD_API_KEY}"
api_secret = "${PROD_API_SECRET}"
api_url = "${PROD_API_URL:-https://api.redislabs.com/v1}"
[profiles."${DYNAMIC_PROFILE_NAME:-custom}"]
deployment_type = "${DYNAMIC_DEPLOYMENT:-cloud}"
api_key = "${DYNAMIC_API_KEY}"
api_secret = "${DYNAMIC_API_SECRET}"
Multiple Profiles
Organizing by Environment
# Development environments
[profiles.dev-cloud]
deployment_type = "cloud"
api_key = "${DEV_CLOUD_KEY}"
api_secret = "${DEV_CLOUD_SECRET}"
[profiles.dev-enterprise]
deployment_type = "enterprise"
url = "https://dev-cluster:9443"
username = "dev-admin"
password = "${DEV_ENTERPRISE_PASSWORD}"
insecure = true
# Staging environments
[profiles.staging-cloud]
deployment_type = "cloud"
api_key = "${STAGING_CLOUD_KEY}"
api_secret = "${STAGING_CLOUD_SECRET}"
[profiles.staging-enterprise]
deployment_type = "enterprise"
url = "https://staging-cluster:9443"
username = "staging-admin"
password = "${STAGING_ENTERPRISE_PASSWORD}"
# Production environments
[profiles.prod-cloud]
deployment_type = "cloud"
api_key = "${PROD_CLOUD_KEY}"
api_secret = "${PROD_CLOUD_SECRET}"
[profiles.prod-enterprise]
deployment_type = "enterprise"
url = "https://prod-cluster:9443"
username = "prod-admin"
password = "${PROD_ENTERPRISE_PASSWORD}"
Organizing by Region
[profiles.us-east-1]
deployment_type = "cloud"
api_key = "${US_EAST_API_KEY}"
api_secret = "${US_EAST_SECRET}"
[profiles.eu-west-1]
deployment_type = "cloud"
api_key = "${EU_WEST_API_KEY}"
api_secret = "${EU_WEST_SECRET}"
[profiles.ap-southeast-1]
deployment_type = "cloud"
api_key = "${APAC_API_KEY}"
api_secret = "${APAC_SECRET}"
Advanced Configuration
Team Shared Configuration
Create a shared base configuration:
# team-config.toml (checked into git)
[profiles.team-base]
deployment_type = "cloud"
api_url = "https://api.redislabs.com/v1"
# Local overrides (not in git)
# ~/.config/redisctl/config.toml
[profiles.team]
deployment_type = "cloud"
api_url = "https://api.redislabs.com/v1"
api_key = "${MY_API_KEY}"
api_secret = "${MY_API_SECRET}"
CI/CD Configuration
# CI/CD specific profiles
[profiles.ci-test]
deployment_type = "cloud"
api_key = "${CI_TEST_API_KEY}"
api_secret = "${CI_TEST_API_SECRET}"
api_url = "${CI_API_URL:-https://api.redislabs.com/v1}"
[profiles.ci-deploy]
deployment_type = "enterprise"
url = "${CI_CLUSTER_URL}"
username = "${CI_USERNAME}"
password = "${CI_PASSWORD}"
insecure = true # CI uses self-signed certs
Security Considerations
File Permissions
Set restrictive permissions on the configuration file:
# Linux/macOS
chmod 600 ~/.config/redisctl/config.toml
# Verify permissions
ls -la ~/.config/redisctl/config.toml
# Should show: -rw-------
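The permission check above can be scripted, for example as a CI step or login hook. A minimal sketch using GNU stat -c (on macOS, stat -f %Lp returns the same octal mode); check_config_perms is a hypothetical helper name, not part of redisctl:

```shell
# Warn when a config file is readable by group or others.
check_config_perms() {
    local file="$1" mode
    mode=$(stat -c '%a' "$file" 2>/dev/null) || { echo "missing: $file"; return 1; }
    if [ "$mode" != "600" ]; then
        echo "WARNING: $file has mode $mode (expected 600)"
        return 1
    fi
    echo "OK: $file is owner-only"
}

# Demo against a throwaway file instead of the real config
tmp=$(mktemp)
chmod 644 "$tmp" && check_config_perms "$tmp"   # warns
chmod 600 "$tmp" && check_config_perms "$tmp"   # passes
rm -f "$tmp"
```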
Credential Storage Best Practices
- Never commit credentials to version control
# .gitignore
config.toml
*.secret
- Use environment variables for sensitive data
[profiles.secure]
deployment_type = "cloud"
api_key = "${REDIS_API_KEY}"       # Set in environment
api_secret = "${REDIS_API_SECRET}" # Set in environment
- Integrate with secret managers
# Set environment variables from secret manager
export REDIS_API_KEY=$(vault kv get -field=api_key secret/redis)
export REDIS_API_SECRET=$(vault kv get -field=api_secret secret/redis)
Migration from Other Formats
From Environment Variables Only
If currently using only environment variables:
# Create profile from environment
redisctl profile set migrated \
--deployment cloud \
--api-key "$REDIS_CLOUD_API_KEY" \
--api-secret "$REDIS_CLOUD_API_SECRET"
From JSON Configuration
Convert JSON to TOML:
# old-config.json
{
"profiles": {
"production": {
"type": "cloud",
"apiKey": "key",
"apiSecret": "secret"
}
}
}
# Convert to config.toml
[profiles.production]
deployment_type = "cloud"
api_key = "key"
api_secret = "secret"
Validation
Check Configuration
# Validate profile configuration
redisctl profile show production
# Test authentication
redisctl auth test --profile production
# List all profiles
redisctl profile list
Common Issues
Invalid TOML syntax
# Wrong - missing quotes
[profiles.prod]
deployment_type = cloud # Should be "cloud"
# Correct
[profiles.prod]
deployment_type = "cloud"
Environment variable not found
# This will fail if MY_VAR is not set
api_key = "${MY_VAR}"
# Use default value to prevent failure
api_key = "${MY_VAR:-default-key}"
Profile name with special characters
# Use quotes for profile names with special characters
[profiles."prod-us-east-1"]
deployment_type = "cloud"
Backup and Recovery
Backup Configuration
# Backup current configuration
cp ~/.config/redisctl/config.toml ~/.config/redisctl/config.toml.backup
# Backup with timestamp
cp ~/.config/redisctl/config.toml \
~/.config/redisctl/config.toml.$(date +%Y%m%d_%H%M%S)
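Timestamped backups accumulate over time. A small helper can prune all but the newest N; this sketch assumes GNU xargs -r and backup filenames without spaces (prune_backups is a hypothetical name, not a redisctl command):

```shell
# Keep only the newest N config.toml.* backups in a directory.
prune_backups() {
    local dir="$1" keep="$2"
    # List newest first, skip the first $keep entries, delete the rest.
    ls -1t "$dir"/config.toml.* 2>/dev/null \
        | tail -n +"$((keep + 1))" \
        | xargs -r rm -f --
}

# Demo in a scratch directory
demo=$(mktemp -d)
for i in 1 2 3 4 5; do : > "$demo/config.toml.backup$i"; done
prune_backups "$demo" 2
ls "$demo"        # two newest backups remain
rm -rf "$demo"
```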
Restore Configuration
# Restore from backup
cp ~/.config/redisctl/config.toml.backup ~/.config/redisctl/config.toml
# Verify restoration
redisctl profile list
Example Configurations
Minimal Configuration
# Minimal working configuration
[profiles.default]
deployment_type = "cloud"
api_key = "your-key"
api_secret = "your-secret"
Full-Featured Configuration
# Complete example with all features
default_profile = "production"
# Production Cloud
[profiles.production]
deployment_type = "cloud"
api_key = "${PROD_API_KEY}"
api_secret = "${PROD_API_SECRET}"
api_url = "${PROD_API_URL:-https://api.redislabs.com/v1}"
# Staging Cloud with defaults
[profiles.staging]
deployment_type = "cloud"
api_key = "${STAGING_API_KEY}"
api_secret = "${STAGING_API_SECRET}"
api_url = "https://api.redislabs.com/v1"
# Development Enterprise
[profiles.dev-enterprise]
deployment_type = "enterprise"
url = "https://dev-cluster:9443"
username = "admin@dev.local"
password = "${DEV_PASSWORD}"
insecure = true
# DR Enterprise with client certs
[profiles.dr-enterprise]
deployment_type = "enterprise"
url = "https://dr-cluster:9443"
username = "admin@dr.local"
password = "${DR_PASSWORD}"
client_cert = "/etc/ssl/client.crt"
client_key = "/etc/ssl/client.key"
ca_cert = "/etc/ssl/ca.crt"
# Local testing
[profiles.local]
deployment_type = "enterprise"
url = "https://localhost:9443"
username = "admin@cluster.local"
password = "test123"
insecure = true
Shell Completions
redisctl supports tab completion for all major shells. This guide shows how to install and configure completions for your shell.
Generating Completions
First, generate the completion script for your shell:
# Bash
redisctl completions bash > redisctl.bash
# Zsh
redisctl completions zsh > _redisctl
# Fish
redisctl completions fish > redisctl.fish
# PowerShell
redisctl completions powershell > redisctl.ps1
# Elvish
redisctl completions elvish > redisctl.elv
Installing Completions
Bash
# Linux - User-specific
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
# Linux - System-wide (the redirect needs root, so pipe through sudo tee)
redisctl completions bash | sudo tee /usr/share/bash-completion/completions/redisctl > /dev/null
# macOS with Homebrew
redisctl completions bash > $(brew --prefix)/etc/bash_completion.d/redisctl
# Reload your shell
source ~/.bashrc
# or start a new terminal
Zsh
# Add to your fpath (usually in ~/.zshrc)
echo 'fpath=(~/.zsh/completions $fpath)' >> ~/.zshrc
# Create directory if needed
mkdir -p ~/.zsh/completions
# Generate completion file
redisctl completions zsh > ~/.zsh/completions/_redisctl
# Reload your shell
source ~/.zshrc
# or start a new terminal
Fish
# Generate completion file
redisctl completions fish > ~/.config/fish/completions/redisctl.fish
# Completions are loaded automatically in new shells
# or reload current shell:
source ~/.config/fish/config.fish
PowerShell
# Add to your PowerShell profile
redisctl completions powershell >> $PROFILE
# Or save to a file and source it
redisctl completions powershell > redisctl.ps1
Add-Content $PROFILE ". $PWD\redisctl.ps1"
# Reload profile
. $PROFILE
Elvish
# Generate completion file
redisctl completions elvish > ~/.elvish/lib/redisctl.elv
# Add to rc.elv
echo "use redisctl" >> ~/.elvish/rc.elv
# Reload shell
exec elvish
Testing Completions
After installation, test that completions work:
# Type and press Tab
redisctl <Tab>
# Should show: api, auth, cloud, enterprise, profile, etc.
# Try sub-commands
redisctl cloud <Tab>
# Should show: database, subscription, user, etc.
# Try options
redisctl --<Tab>
# Should show: --help, --version, --profile, --output, etc.
Troubleshooting
Completions Not Working
- Check shell configuration:
# Bash - verify completion is enabled
echo $BASH_COMPLETION_COMPAT_DIR
# Zsh - check fpath
echo $fpath
# Fish - check completion directory
ls ~/.config/fish/completions/
- Reload your shell:
# Option 1: Source config file
source ~/.bashrc # or ~/.zshrc, etc.
# Option 2: Start new shell
exec $SHELL
# Option 3: Open new terminal
- Verify file permissions:
# Check completion file exists and is readable
ls -la ~/.local/share/bash-completion/completions/redisctl
# or your shell's completion directory
Updating Completions
When updating redisctl, regenerate completions to get new commands:
# Example for Bash
redisctl completions bash > ~/.local/share/bash-completion/completions/redisctl
source ~/.bashrc
Custom Completion Directories
If using non-standard directories:
# Bash - add to .bashrc
source /path/to/redisctl.bash
# Zsh - add to .zshrc
fpath=(/path/to/completions $fpath)
autoload -U compinit && compinit
# Fish - add to config.fish
source /path/to/redisctl.fish
Tips
- Auto-update completions: Add completion generation to your dotfiles setup
- Multiple shells: Generate completions for all shells you use
- Container usage: Mount completion files when using Docker:
docker run -v ~/.local/share/bash-completion:/etc/bash_completion.d:ro ...
- CI/CD: Include completion generation in your deployment scripts
See Also
- Installation Guide - Installing redisctl
- Configuration - Setting up profiles
- Quick Start - Getting started with redisctl
Security Best Practices
This guide covers security best practices for using redisctl in production environments.
Credential Storage
Storage Methods Comparison
| Method | Security Level | Use Case | Pros | Cons |
|---|---|---|---|---|
| OS Keyring | ⭐⭐⭐⭐⭐ High | Production | Encrypted by OS, Most secure | Requires secure-storage feature |
| Environment Variables | ⭐⭐⭐⭐ Good | CI/CD, Containers | No file storage, Easy rotation | Must be set each session |
| Config File (Plaintext) | ⭐⭐ Low | Development only | Simple setup | Credentials visible in file |
Using OS Keyring (Recommended for Production)
The most secure way to store credentials is using your operating system's keyring:
# Install with secure storage support
cargo install redisctl --features secure-storage
# Create secure profile
redisctl profile set production \
--deployment cloud \
--api-key "your-api-key" \
--api-secret "your-api-secret" \
--use-keyring
Platform Support
- macOS: Uses Keychain (automatic)
- Windows: Uses Credential Manager (automatic)
- Linux: Uses Secret Service (requires GNOME Keyring or KWallet)
How Keyring Storage Works
- Initial Setup: When you use --use-keyring, credentials are stored in the OS keyring
- Config Reference: The config file stores references like keyring:production-api-key
- Automatic Retrieval: redisctl automatically retrieves credentials from the keyring when needed
- Secure Updates: Credentials can be updated without exposing them in files
Example config with keyring references:
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key" # Actual value in keyring
api_secret = "keyring:production-api-secret" # Actual value in keyring
api_url = "https://api.redislabs.com/v1" # Non-sensitive, plaintext
Environment Variables (CI/CD)
For automated environments, use environment variables:
# Set credentials
export REDIS_CLOUD_API_KEY="your-key"
export REDIS_CLOUD_API_SECRET="your-secret"
# Use in commands (overrides config)
redisctl cloud database list
# Or reference in config
cat > config.toml <<EOF
[profiles.ci]
deployment_type = "cloud"
api_key = "\${REDIS_CLOUD_API_KEY}"
api_secret = "\${REDIS_CLOUD_API_SECRET}"
EOF
GitHub Actions Example
- name: Deploy Database
env:
REDIS_CLOUD_API_KEY: ${{ secrets.REDIS_API_KEY }}
REDIS_CLOUD_API_SECRET: ${{ secrets.REDIS_API_SECRET }}
run: |
redisctl cloud database create \
--subscription 12345 \
--data @database.json \
--wait
File Permissions
Protect configuration files containing credentials:
# Restrict to owner only
chmod 600 ~/.config/redisctl/config.toml
# Verify permissions
ls -la ~/.config/redisctl/config.toml
# -rw------- 1 user user 1234 Jan 15 10:00 config.toml
Credential Rotation
Regular Rotation Schedule
- Generate new credentials in Redis Cloud/Enterprise console
- Update keyring with new credentials:
redisctl profile set production \
  --api-key "new-key" \
  --api-secret "new-secret" \
  --use-keyring
- Test access with new credentials
- Revoke old credentials in console
Automated Rotation Script
#!/bin/bash
# rotate-credentials.sh
PROFILE="production"
NEW_KEY=$(generate-api-key) # Your key generation method
NEW_SECRET=$(generate-api-secret)
# Update credentials
redisctl profile set "$PROFILE" \
--api-key "$NEW_KEY" \
--api-secret "$NEW_SECRET" \
--use-keyring
# Test new credentials
if redisctl --profile "$PROFILE" cloud subscription list > /dev/null; then
echo "Credential rotation successful"
# Notify old credentials can be revoked
else
echo "Credential rotation failed"
exit 1
fi
Secure Development Practices
Never Commit Credentials
Add to .gitignore:
# Redis configuration
~/.config/redisctl/config.toml
.redisctl/
*.secret
*_credentials.toml
Use Git Hooks
Pre-commit hook to detect credentials:
#!/bin/bash
# .git/hooks/pre-commit
# Check for API keys
if git diff --cached | grep -E "api_key|api_secret|password" | grep -v "keyring:"; then
echo "ERROR: Potential credentials detected in commit"
echo "Use --use-keyring or environment variables instead"
exit 1
fi
Separate Development and Production
Use different profiles for each environment:
# Development (with keyring for safety)
[profiles.dev]
deployment_type = "cloud"
api_key = "keyring:dev-api-key"
api_secret = "keyring:dev-api-secret"
# Staging
[profiles.staging]
deployment_type = "cloud"
api_key = "keyring:staging-api-key"
api_secret = "keyring:staging-api-secret"
# Production
[profiles.production]
deployment_type = "cloud"
api_key = "keyring:production-api-key"
api_secret = "keyring:production-api-secret"
Audit and Monitoring
Profile Usage Audit
Monitor which profiles are being used:
# Enable debug logging
export RUST_LOG=debug
# Commands will log profile usage
redisctl --profile production cloud database list
# [DEBUG] Using Redis Cloud profile: production
Access Logging
Create wrapper script for audit logging:
#!/bin/bash
# /usr/local/bin/redisctl-audit
# Log command execution
echo "[$(date)] User: $USER, Command: redisctl $*" >> /var/log/redisctl-audit.log
# Execute actual command
exec /usr/local/bin/redisctl "$@"
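Once the wrapper is logging, the audit trail can be summarized with standard tools. A sketch that counts commands per user, assuming the exact log format produced by the wrapper above (the sample entries are fabricated):

```shell
# Count redisctl invocations per user from the audit log.
tmplog=$(mktemp)
cat > "$tmplog" <<'EOF'
[Mon Jan 15] User: alice, Command: redisctl cloud database list
[Mon Jan 15] User: bob, Command: redisctl profile list
[Mon Jan 15] User: alice, Command: redisctl cloud subscription list
EOF

# Split each line on "User: ", then take the name before the comma.
awk -F'User: ' '{ split($2, a, ","); count[a[1]]++ }
    END { for (u in count) print u, count[u] }' "$tmplog" | sort
rm -f "$tmplog"
```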
Credential Access Monitoring
Monitor keyring access (macOS example):
# View keychain access logs
log show --predicate 'subsystem == "com.apple.securityd"' --last 1h
Network Security
TLS/SSL Verification
Always verify SSL certificates in production:
[profiles.production]
deployment_type = "enterprise"
url = "https://cluster.example.com:9443"
username = "admin@example.com"
password = "keyring:production-password"
insecure = false # Never true in production
IP Whitelisting
Configure API access from specific IPs only:
- In Redis Cloud console, set IP whitelist
- In Redis Enterprise, configure firewall rules
- Document allowed IPs in team runbook
Incident Response
Compromised Credentials
If credentials are compromised:
- Immediately revoke compromised credentials in console
- Generate new credentials
- Update all systems using the credentials:
# Update all profiles using compromised credentials
for profile in $(redisctl profile list | grep production); do
  redisctl profile set "$profile" \
    --api-key "new-key" \
    --api-secret "new-secret" \
    --use-keyring
done
- Audit access logs for unauthorized usage
- Document incident and update security procedures
Security Checklist
- [ ] Using OS keyring for production credentials
- [ ] Config files have restricted permissions (600)
- [ ] Credentials not committed to version control
- [ ] Environment variables used in CI/CD
- [ ] Regular credential rotation scheduled
- [ ] Audit logging enabled
- [ ] SSL verification enabled
- [ ] IP whitelisting configured
- [ ] Incident response plan documented
- [ ] Team trained on security procedures
Additional Resources
Troubleshooting
Solutions for common issues when using redisctl.
Installation Issues
Binary Not Found
Problem: command not found: redisctl
Solutions:
# Check if binary is in PATH
which redisctl
# Add to PATH (Linux/macOS)
export PATH="$PATH:/path/to/redisctl"
echo 'export PATH="$PATH:/path/to/redisctl"' >> ~/.bashrc
# Make executable
chmod +x /path/to/redisctl
# Verify installation
redisctl --version
Permission Denied
Problem: permission denied: redisctl
Solutions:
# Make executable
chmod +x redisctl
# If installed system-wide
sudo chmod +x /usr/local/bin/redisctl
# Check ownership
ls -la $(which redisctl)
SSL Certificate Errors
Problem: Certificate verification failed
Solutions:
# For self-signed certificates (Enterprise)
export REDIS_ENTERPRISE_INSECURE=true
# Update CA certificates (Linux)
sudo update-ca-certificates
# macOS
brew install ca-certificates
Authentication Issues
Credential Priority Hierarchy
IMPORTANT: redisctl uses credentials in this priority order (highest to lowest):
- Environment Variables (highest priority)
- Profile Configuration
- CLI Flags (lowest priority)
This means environment variables will override your profile settings!
Common Issue: Profile credentials not being used
Diagnosis:
# Check for environment variables
env | grep REDIS
# Run with verbose logging to see which credentials are used
redisctl -vv cloud subscription list 2>&1 | grep -i "using.*credentials"
Solution:
# Unset environment variables to use profile
unset REDIS_CLOUD_API_KEY
unset REDIS_CLOUD_API_SECRET
unset REDIS_CLOUD_API_URL
# For Enterprise
unset REDIS_ENTERPRISE_URL
unset REDIS_ENTERPRISE_USER
unset REDIS_ENTERPRISE_PASSWORD
# Verify profile is now used
redisctl -vv cloud subscription list
Invalid Credentials
Problem: 401 Unauthorized or Authentication failed
Diagnosis:
# Check environment variables first
env | grep REDIS
# Verify profile configuration
redisctl profile show prod
# Use verbose logging to see which credentials are being used
redisctl -vv cloud account get 2>&1 | head -20
Solutions:
# Re-set credentials
redisctl profile set prod \
--deployment cloud \
--api-key "correct-key" \
--api-secret "correct-secret"
# For Enterprise with special characters in password
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster:9443" \
--username "admin@domain.com" \
--password 'p@$$w0rd!' # Use single quotes
Profile Not Found
Problem: Profile 'name' not found
Solutions:
# List available profiles
redisctl profile list
# Check config file location
redisctl profile path
# Create missing profile
redisctl profile set missing-profile \
--deployment cloud \
--api-key "$API_KEY" \
--api-secret "$SECRET"
# Set default profile
redisctl profile default prod
Environment Variable Issues
Problem: Environment variables not being read
Solutions:
# Export variables properly
export REDIS_CLOUD_API_KEY="key"
export REDIS_CLOUD_API_SECRET="secret"
# Check if set
echo $REDIS_CLOUD_API_KEY
# Use in same shell or source
source ~/.bashrc
# Debug with trace logging
RUST_LOG=trace redisctl cloud subscription list 2>&1 | grep -i env
Connection Issues
Network Timeout
Problem: Connection timeout or Failed to connect
Diagnosis:
# Test connectivity
curl -I https://api.redislabs.com/v1/
ping api.redislabs.com
# For Enterprise
curl -k https://your-cluster:9443/v1/bootstrap
# Check DNS
nslookup api.redislabs.com
Solutions:
# Increase timeout (if supported in future versions)
export REDISCTL_TIMEOUT=60
# Check proxy settings
export HTTP_PROXY=http://proxy:8080
export HTTPS_PROXY=http://proxy:8080
# Bypass proxy for local
export NO_PROXY=localhost,127.0.0.1
# Test with curl first
curl -x $HTTPS_PROXY https://api.redislabs.com/v1/
SSL/TLS Errors
Problem: SSL certificate problem or Certificate verify failed
Solutions for Enterprise:
# Allow self-signed certificates
export REDIS_ENTERPRISE_INSECURE=true
# Or in profile
redisctl profile set enterprise \
--deployment enterprise \
--url "https://cluster:9443" \
--username "admin" \
--password "pass" \
--insecure
# Import certificate
# Linux
sudo cp cluster-cert.pem /usr/local/share/ca-certificates/
sudo update-ca-certificates
# macOS
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain cluster-cert.pem
Port Blocked
Problem: Connection refused
Solutions:
# Check if port is open
nc -zv api.redislabs.com 443
nc -zv your-cluster 9443
# Check firewall rules
# Linux
sudo iptables -L -n | grep 9443
# macOS
sudo pfctl -s rules
# Windows
netsh advfirewall firewall show rule name=all
API Errors
Rate Limiting
Problem: 429 Too Many Requests
Solutions:
# Add delay between requests
for sub in $(cat subscriptions.txt); do
redisctl cloud subscription get $sub
sleep 2 # Wait 2 seconds
done
# Implement exponential backoff
retry_with_backoff() {
local max_attempts=5
local attempt=0
local delay=1
while [ $attempt -lt $max_attempts ]; do
if "$@"; then
return 0
fi
echo "Rate limited, waiting ${delay}s..."
sleep $delay
attempt=$((attempt + 1))
delay=$((delay * 2))
done
return 1
}
retry_with_backoff redisctl cloud subscription list
Resource Not Found
Problem: 404 Not Found
Diagnosis:
# Verify resource exists
redisctl cloud subscription list
redisctl cloud database list --subscription 123456
# Check ID format
# Cloud: subscription_id:database_id
# Enterprise: numeric
Solutions:
# Use correct ID format
# Cloud
redisctl cloud database get \
--subscription 123456 \
--database-id 789
# Enterprise
redisctl enterprise database get 1
# List to find correct ID
redisctl cloud subscription list -q "[].{id: id, name: name}"
Invalid Request
Problem: 400 Bad Request
Solutions:
# Validate JSON
cat payload.json | jq .
# Check required fields (note: JSON itself does not allow comments)
# Example: database creation requires name and memoryLimitInGb
cat > database.json <<EOF
{
  "name": "my-database",
  "memoryLimitInGb": 1
}
EOF
# Use schema validation (if available)
redisctl validate database.json
# Test with minimal payload first
echo '{"name": "test", "memoryLimitInGb": 1}' | \
redisctl api cloud post /subscriptions/123/databases --data @-
Command Issues
Command Not Recognized
Problem: Unknown command
Solutions:
# Check available commands
redisctl --help
redisctl cloud --help
redisctl enterprise --help
# Update to latest version
# Download latest from GitHub releases
# Check command syntax
redisctl cloud database list --subscription 123 # Correct
redisctl cloud database list 123 # Incorrect
Missing Required Arguments
Problem: Missing required argument
Solutions:
# Check command requirements
redisctl cloud database get --help
# Provide all required arguments
redisctl cloud database get \
--subscription 123456 \
--database-id 789
# Both --subscription and --database-id are required
# Use environment variables for defaults
export REDIS_SUBSCRIPTION_ID=123456
Output Parsing Errors
Problem: JMESPath query errors or unexpected output
Solutions:
# Test query separately
redisctl cloud subscription list
redisctl cloud subscription list -q "[].name"
# Escape special characters
redisctl cloud database list -q "[?name=='my-db']" # Correct
redisctl cloud database list -q '[?name==`my-db`]' # Also correct
# Debug output format
redisctl cloud subscription list -o json > output.json
cat output.json | jq '.[] | keys'
Async Operation Issues
Operation Timeout
Problem: Operation timeout when using --wait
Solutions:
# Increase timeout
redisctl cloud database create \
--subscription 123 \
--data @db.json \
--wait \
--wait-timeout 1200 # 20 minutes
# Check operation status manually
TASK_ID=$(redisctl cloud database create \
--subscription 123 \
--data @db.json \
-q "taskId")
# Poll manually
while true; do
STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q "status")
echo "Status: $STATUS"
if [ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ]; then
break
fi
sleep 30
done
Task Not Found
Problem: Cannot find task ID for async operation
Solutions:
# Check if operation returns task ID
redisctl cloud database create \
--subscription 123 \
--data @db.json \
-o json | jq .
# Some operations might not be async
# Check API documentation
# List recent tasks
redisctl api cloud get /tasks --query-params "limit=10"
Configuration Issues
Config File Not Found
Problem: Configuration file not loading
Solutions:
# Check file location
redisctl profile path
# Create config directory
mkdir -p ~/.config/redisctl
# Initialize config
redisctl profile set default \
--deployment cloud \
--api-key "key" \
--api-secret "secret"
# Check permissions
chmod 600 ~/.config/redisctl/config.toml
Environment Variable Expansion
Problem: Variables in config not expanding
Solutions:
# config.toml
[profiles.prod]
deployment_type = "cloud"
api_key = "${REDIS_API_KEY}" # Will expand
api_secret = "$REDIS_SECRET" # Won't expand - needs braces
# With defaults
api_url = "${REDIS_API_URL:-https://api.redislabs.com/v1}"
Performance Issues
Slow Response Times
Solutions:
# Enable caching (if implemented)
export REDISCTL_CACHE=true
# Reduce response size
redisctl cloud subscription list --query-params "fields=id,name"
# Use specific queries
redisctl cloud database list -q "[0:5]" # First 5 only
# Parallel processing
for id in $(cat database-ids.txt); do
redisctl cloud database get --subscription 123 --database-id $id &
done
wait
Large Output Handling
Solutions:
# Paginate results
LIMIT=50
OFFSET=0
while true; do
RESULTS=$(redisctl api cloud get /subscriptions \
--query-params "limit=$LIMIT&offset=$OFFSET")
# Stop when the API returns no more results
# (the exact empty shape depends on the endpoint)
if [ -z "$RESULTS" ] || [ "$RESULTS" = "[]" ]; then
break
fi
# Process results
OFFSET=$((OFFSET + LIMIT))
done
# Stream to file
redisctl cloud database list --subscription 123 > databases.json
# Process with streaming tools
for db_name in $(redisctl cloud database list --subscription 123 -q '[].name' --raw); do
echo "Processing: $db_name"
done
Debug Techniques
Enable Debug Logging
# Basic debug
export RUST_LOG=debug
redisctl cloud subscription list
# Trace everything
export RUST_LOG=trace
# Specific modules
export RUST_LOG=redisctl=debug,redis_cloud=trace
# Save debug output
RUST_LOG=trace redisctl cloud subscription list 2> debug.log
Inspect HTTP Traffic
# Use proxy for inspection
export HTTP_PROXY=http://localhost:8888
# Run Charles Proxy or similar
# Or use trace logging
RUST_LOG=trace redisctl api cloud get /subscriptions 2>&1 | grep -i "http"
Test with Curl
# Replicate redisctl request with curl
# Cloud
curl -H "x-api-key: $API_KEY" \
-H "x-api-secret-key: $SECRET" \
https://api.redislabs.com/v1/subscriptions
# Enterprise
curl -k -u "admin:password" \
https://cluster:9443/v1/cluster
Getting Help
Resources
- Check documentation
redisctl --help
redisctl <command> --help
- View debug information
redisctl --version
RUST_LOG=debug redisctl profile list
- Report issues
- GitHub Issues: https://github.com/redis-developer/redisctl/issues
- Include: version, command, error message, debug output
- Community support
- Redis Discord
- Stack Overflow with tag redisctl
Information to Provide
When reporting issues, include:
# Version
redisctl --version
# Command that failed
redisctl cloud database list --subscription 123
# Error message
# Full error output
# Debug output
RUST_LOG=debug redisctl cloud database list --subscription 123 2>&1
# Environment
uname -a
echo $SHELL
# Config (sanitized)
redisctl profile show prod | sed 's/api_key=.*/api_key=REDACTED/'
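The checklist above can be gathered into a single file to attach to an issue. This sketch deliberately lets errors flow into the same file, so it still produces a useful report when `redisctl` itself is broken or missing:

```shell
# Collect issue-report details into one file; failures (e.g. redisctl
# not installed) are captured in the report rather than aborting it.
{
  echo "## version";     redisctl --version
  echo "## environment"; uname -a; echo "$SHELL"
} > /tmp/report.txt 2>&1
echo "report written to /tmp/report.txt"
```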
Appendix: rladmin vs redisctl
Overview
rladmin is Redis Enterprise's built-in CLI for node-local cluster management.
redisctl provides remote cluster management via REST API.
They're complementary tools - use both!
Quick Comparison
| Feature | rladmin | redisctl enterprise |
|---|---|---|
| Deployment | Pre-installed on nodes | Single binary, any platform |
| Access | Local (SSH required) | Remote (REST API) |
| Platforms | Linux only (on nodes) | macOS, Windows, Linux |
| Output | Text only | JSON, YAML, Table |
| Scripting | Text parsing required | Native JSON |
| Multi-cluster | One at a time | Profile system |
Where rladmin Excels
- Local node operations - direct node access
- No network dependency - works when the API is down
- Low-level operations - node-specific commands
- Already installed - no extra setup
Where redisctl Excels
- Remote management - no SSH required
- Structured output - JSON/YAML for automation
- Cross-platform - works on developer laptops
- Multi-cluster - profile system
- Rich features - JMESPath, workflows, support packages
- Better UX - progress indicators, --wait for async ops
Example Comparison
Get Database Info
rladmin approach:
# SSH to node
ssh admin@cluster-node
# Get info (text output)
rladmin info bdb 1 | grep memory | awk '{print $2}'
redisctl approach:
# From your laptop (no SSH)
redisctl enterprise database get 1 -q 'memory_size'
Support Package
rladmin approach:
# 1. SSH to node
ssh admin@cluster-node
# 2. Generate
rladmin cluster debug_info
# 3. SCP to local
scp admin@node:/tmp/debug*.tar.gz ./
# 4. Manually upload via web UI
# (10+ minutes)
redisctl approach:
# One command from laptop
redisctl enterprise support-package cluster --optimize --upload
# (30 seconds)
Use Cases
Use rladmin when:
- SSH'd into a cluster node
- Need low-level node operations
- API is unavailable
- Debugging directly on nodes
Use redisctl when:
- Remote management from laptop
- CI/CD automation
- Managing multiple clusters
- Need structured output
- Building custom tools
Best Practice
Use both:
- redisctl for day-to-day ops, automation, remote management
- rladmin for emergency troubleshooting, node-level operations
Full Comparison
See RLADMIN_COMPARISON.md for detailed feature matrix with 30+ comparisons.
Architecture
redisctl is built as a collection of reusable Rust libraries with a thin CLI layer on top.
Workspace Structure
redisctl/
├── crates/
│ ├── redisctl-config/ # Profile and credential management
│ ├── redis-cloud/ # Cloud API client library
│ ├── redis-enterprise/ # Enterprise API client library
│ └── redisctl/ # CLI application
└── docs/ # mdBook documentation
Library Layers
redisctl-config
Profile and credential management:
- Configuration file parsing
- Secure credential storage (OS keyring)
- Environment variable handling
use redisctl_config::{Config, Profile};

let config = Config::load()?;
let profile = config.get_profile("production")?;
redis-cloud
Redis Cloud API client:
- 21 handler modules
- 95%+ API coverage
- Async/await support
use redis_cloud::CloudClient;

let client = CloudClient::new(api_key, api_secret)?;
let subscriptions = client.subscriptions().list().await?;
redis-enterprise
Redis Enterprise API client:
- 29 handler modules
- 100% API coverage
- Support for binary responses (debug info, support packages)
use redis_enterprise::EnterpriseClient;

let client = EnterpriseClient::new(url, username, password)?;
let cluster = client.cluster().get().await?;
redisctl
CLI application:
- Command parsing (clap)
- Output formatting
- Workflow orchestration
Design Principles
Library-First
The API clients are independent libraries that can be used by other tools (Terraform providers, monitoring dashboards, etc.).
Type-Safe
All API responses are deserialized into Rust structs, catching errors at compile time.
Handler Pattern
Each API resource has a handler module with methods for CRUD operations:
// Handler pattern: each resource exposes CRUD methods
client.databases().list().await?;
client.databases().get(id).await?;
client.databases().create(config).await?;
Async Operations
Built on Tokio for async I/O. Long-running operations return task IDs with optional polling.
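In CLI terms, the polling that backs `--wait` is roughly the loop below. The if-block stands in for querying the task status (the real call and the `/tasks/$TASK_ID` endpoint shape are illustrative assumptions), and a real loop would sleep several seconds between checks:

```shell
# Sketch of poll-until-done; ATTEMPT simulates a task that completes
# on its third status check.
ATTEMPT=0
STATUS=processing
while [ "$STATUS" != "done" ]; do
  ATTEMPT=$((ATTEMPT + 1))
  # stand-in for something like:
  #   STATUS=$(redisctl api cloud get /tasks/$TASK_ID -q 'status' --raw)
  if [ "$ATTEMPT" -ge 3 ]; then STATUS=done; fi
  sleep 0  # use e.g. `sleep 5` against a real API
done
echo "task finished after $ATTEMPT checks"
```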
Error Handling
- Libraries: use `thiserror` for typed errors
- CLI: use `anyhow` for context-rich messages
// Library error
#[derive(Error, Debug)]
pub enum CloudError {
    #[error("API request failed: {0}")]
    Request(#[from] reqwest::Error),
    #[error("Authentication failed")]
    Auth,
}

// CLI error handling
let result = client.databases().get(id).await
    .context("Failed to fetch database")?;
Output System
Three-tier output formatting:
- JSON (default) - Machine-readable
- YAML - Human-readable structured
- Table - Human-readable tabular
JMESPath queries filter output before formatting.
Future Libraries
Planned extractions:
- `redisctl-workflows` - reusable workflow orchestration
- `redisctl-output` - consistent output formatting
Using the Libraries
The redis-cloud and redis-enterprise crates can be used independently in your Rust projects.
Installation
[dependencies]
redis-cloud = "0.2"
redis-enterprise = "0.2"
Basic Usage
Redis Cloud Client
use redis_cloud::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new("your-api-key", "your-api-secret")?;

    // Get account info (root endpoint)
    let account = client.get_raw("/").await?;
    println!("{}", account);

    Ok(())
}
Redis Enterprise Client
use redis_enterprise::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(
        "https://cluster:9443",
        "admin@cluster.local",
        "password",
        true, // insecure: skip TLS verification
    )?;

    // Get cluster info
    let cluster = client.get_raw("/v1/cluster").await?;
    println!("{}", cluster);

    Ok(())
}
More documentation coming soon.
Contributing
Guidelines for contributing to redisctl.
Development Setup
# Clone repository
git clone https://github.com/redis-developer/redisctl.git
cd redisctl
# Build
cargo build
# Run tests
cargo test --workspace --all-features
# Run lints
cargo fmt --all -- --check
cargo clippy --all-targets --all-features -- -D warnings
Branch Naming
- `feat/` - new features
- `fix/` - bug fixes
- `docs/` - documentation
- `refactor/` - code refactoring
- `test/` - test improvements
Commit Messages
Use conventional commits:
feat: add database streaming support
fix: handle empty response in cluster stats
docs: update installation instructions
refactor: extract async operation handling
test: add wiremock tests for VPC peering
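A minimal check of this convention, e.g. for a local commit-msg hook (hypothetical, not part of the repository):

```shell
# Validate a commit message's prefix against the conventional types above
msg="feat: add database streaming support"
case "$msg" in
  feat:*|fix:*|docs:*|refactor:*|test:*) verdict=ok ;;
  *) verdict=invalid ;;
esac
echo "$verdict"
```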
Pull Requests
1. Create a feature branch from `main`
2. Make changes with tests
3. Run `cargo fmt` and `cargo clippy`
4. Push and create a PR
5. Wait for CI to pass
6. Request review
Testing
Unit Tests
cargo test --lib --all-features
Integration Tests
cargo test --test '*' --all-features
With Docker Environment
docker compose up -d
cargo test --package redis-enterprise
Code Style
- Use `anyhow` for CLI errors, `thiserror` for library errors
- All public APIs must have doc comments
- Follow Rust 2024 edition idioms
- No emojis in code, commits, or docs
Adding Commands
1. Add a CLI enum variant in `crates/redisctl/src/cli/`
2. Implement the handler in `crates/redisctl/src/commands/`
3. Add a client method in the library crate if needed
4. Add tests
5. Update documentation
Questions?
Open an issue at github.com/redis-developer/redisctl/issues