Deployment
This guide covers deploying Korvet in production environments.
Docker Deployment
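A single broker can be started directly with Docker. This is a minimal sketch using the image name and environment variables that appear in the examples later in this guide; the network name `korvet-net` is an illustrative choice, not a requirement:

```shell
# Create a shared network so the broker can reach Redis by name
docker network create korvet-net

# Start Redis (the storage backend)
docker run -d --name redis --network korvet-net -p 6379:6379 redis:latest

# Start a single Korvet broker wired to that Redis instance
docker run -d --name korvet --network korvet-net -p 9092:9092 \
  -e KORVET_REDIS_URI=redis://redis:6379 \
  -e KORVET_SERVER_HOST=0.0.0.0 \
  -e KORVET_SERVER_PORT=9092 \
  redisfield/korvet:latest
```

For multi-broker setups, the Docker Compose example under Multi-Broker Deployment below is the more convenient starting point.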
Kubernetes Deployment
For single-broker or simple deployments, use a standard Deployment. For multi-broker clusters with proper broker discovery, see Multi-Broker Deployment.
Simple Deployment (Single Broker)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: korvet
spec:
  replicas: 1  # Single broker
  selector:
    matchLabels:
      app: korvet
  template:
    metadata:
      labels:
        app: korvet
    spec:
      containers:
        - name: korvet
          image: redisfield/korvet:latest
          ports:
            - containerPort: 9092
          env:
            - name: KORVET_REDIS_URI
              value: redis://redis-service:6379
            - name: KORVET_SERVER_HOST
              value: "0.0.0.0"
            - name: KORVET_SERVER_PORT
              value: "9092"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
Service Manifest
apiVersion: v1
kind: Service
metadata:
  name: korvet-service
spec:
  selector:
    app: korvet
  ports:
    - protocol: TCP
      port: 9092
      targetPort: 9092
  type: LoadBalancer
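Assuming the two manifests above are saved as `korvet-deployment.yaml` and `korvet-service.yaml` (the filenames are illustrative), deploy and verify with:

```shell
kubectl apply -f korvet-deployment.yaml
kubectl apply -f korvet-service.yaml

# Wait for the broker to become ready, then find the external address
kubectl rollout status deployment/korvet
kubectl get service korvet-service
```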
Note: For multi-broker deployments with high availability, use a StatefulSet instead. See Kubernetes StatefulSet.
Multi-Broker Deployment
Korvet supports multi-broker deployments where multiple Korvet instances share the same Redis backend. This provides high availability and load distribution while maintaining full data consistency.
Architecture Overview
In a multi-broker deployment:
- All brokers connect to the same Redis instance or cluster
- Brokers automatically discover each other via the Broker Registry stored in Redis
- Kafka clients can connect to any broker and receive metadata about all brokers in the cluster
- Messages produced to one broker are immediately available from any other broker
┌─────────────────┐
│ Load Balancer │
│ (TCP/9092) │
└────────┬────────┘
│
┌────────────────────┼────────────────────┐
│ │ │
▼ ▼ ▼
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Korvet 0 │ │ Korvet 1 │ │ Korvet 2 │
│ broker-id=0 │ │ broker-id=1 │ │ broker-id=2 │
│ port=9092 │ │ port=9092 │ │ port=9092 │
└───────┬───────┘ └───────┬───────┘ └───────┬───────┘
│ │ │
└───────────────────┼───────────────────┘
│
▼
┌───────────────────────┐
│ Redis │
│ (Streams + Registry) │
└───────────────────────┘
Broker Configuration
Each broker requires a unique broker-id and proper network configuration:
korvet:
  server:
    broker-id: 0           # Unique ID for this broker (0, 1, 2, etc.)
    host: 0.0.0.0          # Listen on all interfaces
    port: 9092             # Kafka protocol port
    advertised-host: korvet-0.korvet.default.svc.cluster.local  # Hostname clients use
    advertised-port: 9092  # Port clients use
    keyspace: korvet       # Must be the same across all brokers
  redis:
    uri: redis://redis:6379  # Same Redis for all brokers
All brokers in a cluster must use the same keyspace and connect to the same Redis instance.
Broker Discovery
Korvet uses a Broker Registry stored in Redis for automatic broker discovery:
- Each broker registers itself on startup with its ID, host, and port
- Brokers send periodic heartbeats (every 10 seconds by default)
- Stale broker entries are automatically removed after 30 seconds without a heartbeat
- Kafka clients receive all registered brokers in metadata responses
Redis keys used by the registry:
{keyspace}:brokers # Set of all broker IDs
{keyspace}:broker:{id} # Hash with broker host/port (TTL: 30s)
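The registry can be inspected directly with redis-cli. This sketch assumes the default keyspace `korvet` and a running multi-broker cluster:

```shell
redis-cli SMEMBERS korvet:brokers   # IDs of all registered brokers
redis-cli HGETALL korvet:broker:0   # host/port fields for broker 0
redis-cli TTL korvet:broker:1       # seconds until the entry expires without a heartbeat
```

Watching the TTL count down and reset is a quick way to confirm that a broker's heartbeat loop is running.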
Docker Compose Example
services:
  redis:
    image: redis:latest
    ports:
      - "6379:6379"
  korvet-0:
    image: redisfield/korvet:latest
    ports:
      - "9092:9092"
    environment:
      KORVET_SERVER_BROKER_ID: 0
      KORVET_SERVER_HOST: 0.0.0.0
      KORVET_SERVER_ADVERTISED_HOST: localhost
      KORVET_SERVER_ADVERTISED_PORT: 9092
      KORVET_REDIS_URI: redis://redis:6379
  korvet-1:
    image: redisfield/korvet:latest
    ports:
      - "9093:9092"
    environment:
      KORVET_SERVER_BROKER_ID: 1
      KORVET_SERVER_HOST: 0.0.0.0
      KORVET_SERVER_ADVERTISED_HOST: localhost
      KORVET_SERVER_ADVERTISED_PORT: 9093
      KORVET_REDIS_URI: redis://redis:6379
  korvet-2:
    image: redisfield/korvet:latest
    ports:
      - "9094:9092"
    environment:
      KORVET_SERVER_BROKER_ID: 2
      KORVET_SERVER_HOST: 0.0.0.0
      KORVET_SERVER_ADVERTISED_HOST: localhost
      KORVET_SERVER_ADVERTISED_PORT: 9094
      KORVET_REDIS_URI: redis://redis:6379
Clients can connect using multiple bootstrap servers:
kafka-console-producer --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --topic test
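To see the cross-broker consistency claim in action, produce through one broker and consume through another using the standard Kafka CLI tools (the topic name `test` is just an example):

```shell
# Produce a message via broker 0
echo "hello from broker 0" | kafka-console-producer --bootstrap-server localhost:9092 --topic test

# Read it back via broker 2
kafka-console-consumer --bootstrap-server localhost:9094 --topic test \
  --from-beginning --max-messages 1
```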
Kubernetes StatefulSet
For Kubernetes, use a StatefulSet to ensure each broker gets a unique, stable identity. The StatefulSet provides:
- Stable network identity: Each pod gets a predictable DNS name (korvet-0, korvet-1, etc.)
- Ordered deployment: Pods can be created sequentially; the manifest below uses podManagementPolicy: Parallel instead, since brokers register with the registry independently
- Stable storage: PersistentVolumeClaims are retained across pod restarts (if needed)
Complete Kubernetes Manifests
# ConfigMap for shared configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: korvet-common
data:
  KORVET_SERVER_HOST: "0.0.0.0"
  KORVET_SERVER_PORT: "9092"
  KORVET_SERVER_KEYSPACE: "korvet"
  KORVET_SERVER_REBALANCE_DELAY: "5s"  # Allow time for consumers to join in K8s
---
# Headless service for StatefulSet DNS
apiVersion: v1
kind: Service
metadata:
  name: korvet
  labels:
    app: korvet
spec:
  clusterIP: None
  selector:
    app: korvet
  ports:
    - port: 9092
      name: kafka
    - port: 8080
      name: actuator
---
# LoadBalancer service for external access
apiVersion: v1
kind: Service
metadata:
  name: korvet-lb
spec:
  type: LoadBalancer
  selector:
    app: korvet
  ports:
    - port: 9092
      targetPort: 9092
      name: kafka
---
# StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: korvet
spec:
  serviceName: korvet
  replicas: 3
  podManagementPolicy: Parallel  # Start all pods simultaneously
  selector:
    matchLabels:
      app: korvet
  template:
    metadata:
      labels:
        app: korvet
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      terminationGracePeriodSeconds: 30
      initContainers:
        # Extract broker ID from pod name (korvet-0 -> 0, korvet-1 -> 1, etc.)
        - name: init-broker-id
          image: busybox:1.36
          command:
            - sh
            - -c
            - |
              ORDINAL=${HOSTNAME##*-}
              echo "KORVET_SERVER_BROKER_ID=${ORDINAL}" > /config/broker.env
              echo "KORVET_SERVER_ADVERTISED_HOST=${HOSTNAME}.korvet.${NAMESPACE}.svc.cluster.local" >> /config/broker.env
              echo "Broker ID: ${ORDINAL}, Advertised Host: ${HOSTNAME}.korvet.${NAMESPACE}.svc.cluster.local"
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: config-volume
              mountPath: /config
      containers:
        - name: korvet
          image: redisfield/korvet:latest
          command:
            - sh
            - -c
            - |
              # Source the broker-specific config
              export $(cat /config/broker.env | xargs)
              # Start the application
              exec java -jar /app/korvet.jar
          ports:
            - containerPort: 9092
              name: kafka
            - containerPort: 8080
              name: actuator
          envFrom:
            - configMapRef:
                name: korvet-common
            - secretRef:
                name: korvet-redis-credentials
                optional: true
          env:
            - name: KORVET_REDIS_URI
              value: redis://redis:6379
            - name: KORVET_SERVER_ADVERTISED_PORT
              value: "9092"
            # JVM tuning for containers
            - name: JAVA_TOOL_OPTIONS
              value: "-XX:MaxRAMPercentage=75.0 -XX:+UseG1GC -XX:MaxDirectMemorySize=256m"
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "2000m"
          volumeMounts:
            - name: config-volume
              mountPath: /config
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
            failureThreshold: 30
      volumes:
        - name: config-volume
          emptyDir: {}
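Assuming the manifests above are saved together as `korvet-statefulset.yaml` (an illustrative filename), deploy and verify the cluster with:

```shell
kubectl apply -f korvet-statefulset.yaml
kubectl rollout status statefulset/korvet

# Each pod should be Running with its stable ordinal name (korvet-0, korvet-1, korvet-2)
kubectl get pods -l app=korvet -o wide
```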
Redis Credentials Secret
If Redis requires authentication, create a secret:
kubectl create secret generic korvet-redis-credentials \
--from-literal=KORVET_REDIS_PASSWORD=your-password
Scaling the Cluster
Scale up or down with:
# Scale to 5 brokers
kubectl scale statefulset korvet --replicas=5
# Scale down to 3 brokers
kubectl scale statefulset korvet --replicas=3
When scaling down, brokers are removed in reverse order (highest ID first). The broker registry automatically removes stale entries after their TTL expires (30 seconds).
Pod Disruption Budget
For high availability, configure a PodDisruptionBudget:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: korvet-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: korvet
External Access
For clients outside the Kubernetes cluster, you have several options:
Option 1: LoadBalancer Service (shown above)
Clients connect to the LoadBalancer IP. All traffic is distributed across brokers.
Option 2: NodePort Service
apiVersion: v1
kind: Service
metadata:
  name: korvet-nodeport
spec:
  type: NodePort
  selector:
    app: korvet
  ports:
    - port: 9092
      targetPort: 9092
      nodePort: 30092
Option 3: Ingress with TCP support (e.g., NGINX Ingress Controller)
Configure TCP services in the ingress controller’s ConfigMap.
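With the NGINX Ingress Controller, TCP forwarding is configured through its `tcp-services` ConfigMap. A sketch, assuming the controller runs in the `ingress-nginx` namespace and forwards to the `korvet-service` defined earlier (adjust namespace and service names to your installation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # external port -> namespace/service:port
  "9092": "default/korvet-service:9092"
```

The controller's Deployment must also expose port 9092 for this mapping to take effect.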
Monitoring in Kubernetes
Korvet exposes Prometheus metrics at /actuator/prometheus. With the annotations in the StatefulSet, Prometheus will automatically scrape metrics.
For Grafana dashboards, query metrics like:
- korvet_produce_requests_total - Total produce requests
- korvet_fetch_requests_total - Total fetch requests
- korvet_active_connections - Current active connections
Consumer Group Coordination
In multi-broker deployments, consumer group coordination is handled specially:
- The group coordinator is selected using consistent hashing based on the group ID
- Clients are directed to the correct coordinator via the FindCoordinator response
- All consumer group state is stored in Redis, so any broker can serve offset commits/fetches
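Group state can be observed with the standard Kafka CLI tools, against any broker in the cluster. This assumes a consumer group named `my-group` exists and that Korvet serves the ListGroups/DescribeGroups APIs used by this tool (an assumption; verify against your Korvet version):

```shell
kafka-consumer-groups --bootstrap-server localhost:9092 --list
kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-group
```

Because the state lives in Redis, running the same commands against a different broker should report identical offsets and membership.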
Best Practices
- Broker IDs: Use sequential IDs starting from 0 (0, 1, 2, …). Each broker must have a unique ID.
- Advertised Listeners: Always configure advertised-host and advertised-port to the address clients should use to connect. This is especially important in Docker/Kubernetes, where internal and external addresses differ.
- Load Balancing: Use a TCP load balancer (not HTTP) in front of your brokers. Any load balancing strategy works since all brokers serve the same data.
- Bootstrap Servers: Configure Kafka clients with multiple bootstrap servers for fault tolerance: bootstrap.servers=korvet-0:9092,korvet-1:9092,korvet-2:9092
- Redis High Availability: For production, use Redis Cluster or Redis Enterprise to ensure the storage layer is also highly available.
High Availability
For production deployments:
- Multiple instances: Run 3+ Korvet instances for redundancy
- Redis cluster: Use Redis Cluster or Redis Enterprise for HA
- Health checks: Configure liveness and readiness probes
- Graceful shutdown: Allow time for in-flight requests to complete
Scaling
Korvet can be scaled horizontally:
- Stateless: Each instance shares state via Redis
- Load balancing: Use any TCP load balancer
- Add brokers: Simply start new instances with unique broker IDs
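For example, a fourth broker can join the Docker Compose cluster above by starting another container with the next free ID. The network name is an assumption (Compose derives it from the project directory; check with `docker network ls`):

```shell
docker run -d --name korvet-3 --network korvet_default -p 9095:9092 \
  -e KORVET_SERVER_BROKER_ID=3 \
  -e KORVET_SERVER_HOST=0.0.0.0 \
  -e KORVET_SERVER_ADVERTISED_HOST=localhost \
  -e KORVET_SERVER_ADVERTISED_PORT=9095 \
  -e KORVET_REDIS_URI=redis://redis:6379 \
  redisfield/korvet:latest
```

Once its heartbeat registers in the Broker Registry, existing clients pick up the new broker from metadata responses without reconfiguration.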