Tinytown
Simple multi-agent orchestration using Redis - all the power, none of the complexity.
Welcome to Tinytown!
Tinytown is a minimal, blazing-fast multi-agent orchestration system. It lets you coordinate AI coding agents (Claude, Augment, Codex, and more) using Redis for message passing.
Why Tinytown?
If you've tried to set up complex orchestration systems like Gastown and found yourself drowning in configuration files, agent taxonomies, and recovery mechanisms, Tinytown is for you.
| What you want | Complex systems | Tinytown |
|---|---|---|
| Get started | Hours of setup | 30 seconds |
| Understand it | 50+ concepts | 5 types |
| Configure it | 10+ config files | 1 TOML file |
| Debug it | Navigate 300K+ lines | Read 1,400 lines |
Core Philosophy
Simplicity is a feature, not a limitation.
Tinytown does less, so you can do more. We include only what you need:
- ✅ Spawn and manage agents
- ✅ Assign tasks and track state
- ✅ Keep unassigned work in a shared backlog
- ✅ Pass messages between agents
- ✅ Persist work in Redis
And we deliberately leave out:
- ❌ Complex workflow DAGs
- ❌ Distributed transactions
- ❌ Recovery daemons
- ❌ Multi-layer databases
When you need those features, you'll know, and you can add them yourself in a few lines of code, or upgrade to a more complex system.
Quick Example
# Initialize a town
tt init --name my-project
# Spawn agents (uses default model, or specify with --model)
tt spawn frontend
tt spawn backend
tt spawn reviewer
# Assign tasks
tt assign frontend "Build the login page"
tt assign backend "Create the auth API"
tt assign reviewer "Review PRs when ready"
# Or park unassigned tasks for role-based claiming
tt backlog add "Harden auth error handling" --tags backend,security
tt backlog list
# Check status
tt status
# Or let the conductor orchestrate for you!
tt conductor
# "Build a user authentication system"
# Conductor spawns agents, breaks down tasks, and coordinates...
That's it. Your agents are now coordinating via Redis.
Plan Work with tasks.toml
For complex workflows, define tasks in a file:
tt plan --init # Creates tasks.toml
Edit tasks.toml to define your pipeline:
[[tasks]]
id = "auth-api"
description = "Build the auth API"
agent = "backend"
status = "pending"
[[tasks]]
id = "auth-tests"
description = "Write auth tests"
agent = "tester"
parent = "auth-api"
status = "pending"
Then sync to Redis and let agents work:
tt sync push
tt conductor
See tt plan for the full task DSL.
What's Next?
- Installation - Get Tinytown running in 30 seconds
- Quick Start - Your first multi-agent workflow
- Core Concepts - Understand Towns, Agents, Tasks, Messages, and Channels
- Townhall REST API - HTTP control plane for automation
- Townhall MCP Server - MCP tools/resources/prompts for LLM clients
- Coming from Gastown? - Migration guide for Gastown users
Named After
Tiny Town, Colorado - a miniature village with big charm, just like this project!
Installation
Getting Tinytown running takes about 30 seconds.
Step 1: Install Tinytown
From crates.io (Recommended)
cargo install tinytown
From Source
git clone https://github.com/redis-field-engineering/tinytown.git
cd tinytown
cargo install --path .
Need Rust? Install it via rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Step 2: Install Redis 8.0+
Tinytown requires Redis 8.0 or later.
Option 1: Bootstrap (Recommended)
Let Tinytown download and build Redis for you using an AI agent:
tt bootstrap
export PATH="$HOME/.tt/bin:$PATH"
This gets you the latest Redis compiled and optimized for your machine. Add the export to ~/.zshrc or ~/.bashrc for persistence.
Option 2: Package Manager
macOS:
brew install redis
Ubuntu/Debian:
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis
Option 3: From Source (Manual)
curl -LO https://github.com/redis/redis/archive/refs/tags/8.0.2.tar.gz
tar xzf 8.0.2.tar.gz
cd redis-8.0.2 && make && sudo make install
For more options, see the Redis downloads page.
Step 3: Verify Installation
# Check tt is installed
tt --help
# Should output:
# Tinytown - Simple multi-agent orchestration using Redis
# ...
# Verify Redis version
redis-server --version
# Should show v=8.x.x or higher
What's Next?
You're ready to go! Head to the Quick Start to create your first town.
Quick Start
Let's get a multi-agent workflow running in under 5 minutes.
First time? Make sure you've installed Tinytown first.
Step 1: Initialize a Town
A Town is your orchestration workspace. It manages Redis, agents, and message passing.
# Create a project directory
mkdir my-project && cd my-project
# Initialize the town
tt init --name my-project
You'll see:
Initialized town 'my-project' at .
Redis running with Unix socket for fast message passing
Run 'tt spawn <name>' to create agents
This creates:
tinytown.toml - Configuration file
agents/ - Agent working directories
logs/ - Activity logs
tasks/ - Task storage
redis.sock - Unix socket for fast Redis communication
Step 2: Spawn an Agent
Agents are workers that execute tasks. Spawn one:
tt spawn worker-1
Output:
Spawned agent 'worker-1' using model 'claude'
ID: 550e8400-e29b-41d4-a716-446655440000
The agent uses the default_cli from your config. Spawn more:
tt spawn worker-2
tt spawn reviewer
Or override with --model:
tt spawn specialist --model codex
Step 3: Assign Tasks
Give your agents something to do:
tt assign worker-1 "Implement the user login API endpoint"
tt assign worker-2 "Write tests for the login API"
tt assign reviewer "Review the login implementation when ready"
Step 4: Check Status
See what's happening in your town:
tt status
Output:
Town: my-project
Root: /path/to/my-project
Redis: unix:///path/to/my-project/redis.sock
Agents: 3
worker-1 (Working) - 0 messages pending
worker-2 (Idle) - 1 messages pending
reviewer (Idle) - 1 messages pending
List all agents:
tt list
Step 5: Keep It Running
To keep a town connection open during development:
tt start
Press Ctrl+C to stop gracefully.
What Just Happened?
You created a Town with three Agents. Each agent received a Task via a Message sent through a Redis Channel.
That's the entire Tinytown model:
Town --spawns--> Agents
        |
  Channel (Redis)
        |
Messages --contain--> Tasks
Bonus: Planning with tasks.toml
For larger projects, define tasks in a file instead of CLI commands:
# Initialize a task plan
tt plan --init
Edit tasks.toml:
[[tasks]]
id = "login-api"
description = "Implement the user login API endpoint"
agent = "worker-1"
status = "pending"
[[tasks]]
id = "login-tests"
description = "Write tests for the login API"
agent = "worker-2"
status = "pending"
parent = "login-api"
Sync to Redis:
tt sync push
Now your tasks are version-controlled and can be reviewed in PRs. See tt plan for more.
Next Steps
- Your First Town - Deeper dive into town setup
- Core Concepts - Understand the 5 core types
- Single Agent Workflow - Complete tutorial
Your First Town
Let's build a real workflow: coordinating agents to implement and review a feature.
The Scenario
You want to:
- Have one agent implement a feature
- Have another agent write tests
- Have a third agent review both
Project Setup
# Create and enter your project
mkdir feature-builder && cd feature-builder
# Initialize git (optional but recommended)
git init
# Initialize tinytown
tt init --name feature-builder
Understanding the Config
Open tinytown.toml:
name = "feature-builder"
default_cli = "claude"
max_agents = 10
[redis]
use_socket = true
socket_path = "redis.sock"
Key settings:
- use_socket = true - Uses a Unix socket for ~10x faster communication than TCP
- default_cli - Agent CLI used when --model isn't specified
- max_agents - Prevents accidentally spawning too many agents
Create Your Team
# The implementer
tt spawn dev --model claude
# The tester
tt spawn tester --model auggie
# The reviewer
tt spawn reviewer --model codex
Check your team:
tt list
Agents:
dev (550e8400-...) - Starting
tester (6ba7b810-...) - Starting
reviewer (6ba7b811-...) - Starting
Assign the Work
# Implementation task
tt assign dev "Create a REST API endpoint POST /users that:
- Accepts {email, password, name}
- Validates email format
- Hashes password with bcrypt
- Returns {id, email, name, created_at}"
# Testing task
tt assign tester "Write integration tests for POST /users:
- Test successful creation
- Test duplicate email rejection
- Test invalid email format
- Test missing required fields"
# Review task
tt assign reviewer "Review the implementation and tests when ready:
- Check for security issues
- Verify error handling
- Ensure tests cover edge cases"
Monitor Progress
# See overall status
tt status
# Watch for changes (re-run periodically)
watch -n 5 tt status
What Happens Behind the Scenes
- Task Creation: Each tt assign creates a Task with a unique ID
- Message Sending: A Message of type TaskAssign is sent to the agent's inbox
- Redis Queue: Messages are stored in town-isolated Redis lists (tt:<town>:inbox:<agent-id>)
- Agent Pickup: Agents receive messages via BLPOP (blocking pop)
- State Tracking: Agent and task states are stored in Redis
Connecting Real Agents
Tinytown creates the infrastructure, but you need to connect actual AI agents. The spawn command prepares the configuration; you then run the agent:
# Example: Run Claude CLI pointing at your town
cd agents/dev
claude --print # Uses the model command from config
Or with Augment:
cd agents/tester
augment # Uses the model command from config
Cleanup
When you're done:
# Stop the town's agents
tt stop
# Or just Ctrl+C if running `tt start`
Next Steps
- Core Concepts - Deep dive into Towns, Agents, Tasks
- Multi-Agent Tutorial - More complex coordination patterns
Core Concepts Overview
Tinytown has exactly 5 core types. That's it. No more, no less.
+------------------------------------------+
|             Your Application             |
+--------------------+---------------------+
                     |
             +-------v-------+
             |     Town      |   <- Orchestrator
             +-------+-------+
                     |
          +----------v-----------+
          |    Channel (Redis)   |   <- Message passing
          +----------+-----------+
                     |
       +-------------+-------------+
       v             v             v
  +---------+   +---------+   +---------+
  |  Agent  |   |  Agent  |...|  Agent  |   <- Workers
  +----+----+   +----+----+   +----+----+
       |             |             |
       v             v             v
     Tasks         Tasks         Tasks      <- Work units
The 5 Core Types
| Type | What It Is | Redis Key Pattern |
|---|---|---|
| Town | The orchestrator that manages everything | N/A (local) |
| Agent | A worker that executes tasks | tt:<town>:agent:<id> |
| Task | A unit of work with lifecycle | tt:<town>:task:<id> |
| Message | Communication between agents | Transient |
| Channel | Redis-based message transport | tt:<town>:inbox:<id> |
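The key patterns in this table compose mechanically from the town name and an ID. As a minimal sketch (the helper names below are illustrative, not Tinytown's actual API):

```rust
/// Build the town-isolated Redis keys documented in the table above.
/// These helper names are illustrative, not Tinytown's actual API.
fn agent_key(town: &str, id: &str) -> String {
    format!("tt:{}:agent:{}", town, id)
}

fn task_key(town: &str, id: &str) -> String {
    format!("tt:{}:task:{}", town, id)
}

fn inbox_key(town: &str, id: &str) -> String {
    format!("tt:{}:inbox:{}", town, id)
}

fn main() {
    // Two towns sharing one Redis instance never collide:
    assert_ne!(agent_key("town-a", "1"), agent_key("town-b", "1"));
    println!("{}", inbox_key("my-project", "550e8400"));
}
```

The town-name prefix is what lets multiple projects share a single Redis instance.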
How They Work Together
1. Town Orchestrates
The Town is your control center. It:
- Starts and manages Redis
- Spawns and tracks agents
- Provides the API for coordination
tt init my-project
tt status
2. Agents Execute
Agents are workers. Each agent has:
- A unique ID
- A name (human-readable)
- A model (claude, auggie, codex, etc.)
- A state (starting, idle, working, stopped)
tt spawn worker-1 --model claude
tt list
3. Tasks Represent Work
Tasks are what agents work on. Each task has:
- A description
- A state (pending → assigned → running → completed/failed)
- An assigned agent
tt assign worker-1 "Implement the login API"
tt tasks
4. Messages Coordinate
Messages are how agents communicate. They carry:
- Sender and recipient
- Message type (TaskAssign, TaskDone, StatusRequest, etc.)
- Priority (Low, Normal, High, Urgent)
tt send worker-1 "Please update the README"
tt send worker-1 --urgent "Critical bug in production!"
5. Channel Transports
The Channel is the Redis connection that moves messages. It provides:
- Priority queues (urgent messages go first)
- Blocking receive (agents wait efficiently)
- State persistence (survives restarts)
Mental Model
Think of it like a small company:
| Tinytown | Company Analogy |
|---|---|
| Town | The office building |
| Agent | An employee |
| Task | A work ticket |
| Message | An email/Slack message |
| Channel | The email/Slack system |
What's NOT in Tinytown
Deliberately excluded to keep things simple:
- ❌ Workflow DAGs - Just assign tasks directly
- ❌ Recovery daemons - Redis persistence handles crashes
- ❌ Multi-tier databases - One Redis instance
- ❌ Complex agent hierarchies - All agents are peers
If you need these, you can build them on top of Tinytown's primitives, or use a more complex system like Gastown.
Next Steps
Deep dive into each type:
Towns
A Town is your orchestration workspace. It's the top-level container that manages Redis, agents, and coordination.
What a Town Contains
my-project/              # Town root
├── tinytown.toml        # Configuration
├── .gitignore           # Auto-updated to exclude .tt/
└── .tt/                 # Runtime artifacts (gitignored)
    ├── redis.sock       # Unix socket (when running)
    ├── redis.pid        # Redis process ID
    ├── redis.aof        # Redis persistence (if enabled)
    ├── agents/          # Agent working directories
    ├── logs/            # Activity logs
    └── tasks/           # Task storage
All runtime artifacts are stored under .tt/ which is automatically added to .gitignore during tt init. This keeps your repository clean and prevents accidental commits of logs, sockets, and other temporary files.
Creating a Town
CLI
tt init --name my-project
Rust API
use tinytown::{Town, Result};
#[tokio::main]
async fn main() -> Result<()> {
// Initialize a new town
let town = Town::init("./my-project", "my-project").await?;
// Town is now running with Redis started
Ok(())
}
Connecting to an Existing Town
#![allow(unused)]
fn main() {
// Connect to existing town (starts Redis if needed)
let town = Town::connect("./my-project").await?;
}
Town Configuration
The tinytown.toml file:
name = "my-project"
default_cli = "claude"
max_agents = 10
[redis]
use_socket = true
socket_path = ".tt/redis.sock"
host = "127.0.0.1"
port = 6379
[agent_clis.claude]
name = "claude"
command = "claude --print"
[agent_clis.auggie]
name = "auggie"
command = "augment"
Configuration Options
| Option | Default | Description |
|---|---|---|
name | Directory name | Human-readable town name |
redis.use_socket | true | Use Unix socket (faster) vs TCP |
redis.socket_path | .tt/redis.sock | Socket file path (under .tt/) |
redis.host | 127.0.0.1 | TCP host (if not using socket) |
redis.port | 6379 | TCP port (if not using socket) |
default_cli | claude | Default CLI for new agents |
max_agents | 10 | Maximum concurrent agents |
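The defaults in this table can be captured in a Default impl. A sketch with an assumed struct shape (the field names and types are illustrative, not Tinytown's real config type):

```rust
/// A sketch of the configuration defaults from the table above.
/// The struct and field names are assumptions for illustration only.
#[derive(Debug)]
struct TownConfig {
    name: String,
    default_cli: String,
    max_agents: u32,
    use_socket: bool,
    socket_path: String,
    host: String,
    port: u16,
}

impl Default for TownConfig {
    fn default() -> Self {
        TownConfig {
            name: String::new(),          // falls back to the directory name
            default_cli: "claude".into(),
            max_agents: 10,
            use_socket: true,             // Unix socket is the fast default
            socket_path: ".tt/redis.sock".into(),
            host: "127.0.0.1".into(),
            port: 6379,
        }
    }
}

fn main() {
    let cfg = TownConfig::default();
    assert!(cfg.use_socket && cfg.max_agents == 10);
    println!("{:?}", cfg);
}
```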
Town Lifecycle
+--------------+
|    init()    |   <- Creates directories, config, starts Redis
+------+-------+
       |
       v
+--------------+
|   running    |   <- Agents can be spawned, tasks assigned
+------+-------+
       |
       v
+--------------+
|     drop     |   <- Redis stopped, cleanup
+--------------+
Town Methods
#![allow(unused)]
fn main() {
// Spawn a new agent
let agent = town.spawn_agent("worker-1", "claude").await?;
// Get handle to existing agent
let agent = town.agent("worker-1").await?;
// List all agents
let agents = town.list_agents().await;
// Access the channel directly
let channel = town.channel();
// Get configuration
let config = town.config();
// Get root directory
let root = town.root();
}
Redis Management
Tinytown automatically manages Redis:
- On init(): Starts redis-server with a Unix socket
- On connect(): Connects to the existing instance, or starts one if needed
- On drop: Stops Redis gracefully
Unix Socket vs TCP
Unix sockets are ~10x faster for local communication:
| Mode | Latency | Use Case |
|---|---|---|
| Unix Socket | ~0.1ms | Local development (default) |
| TCP | ~1ms | Remote Redis, Docker |
To use TCP instead:
[redis]
use_socket = false
host = "redis.example.com"
port = 6379
Agents
An Agent is a worker that executes tasks. Agents can be AI models (Claude, Auggie, Codex) or custom processes.
Agent Properties
| Property | Type | Description |
|---|---|---|
id | UUID | Unique identifier |
name | String | Human-readable name |
agent_type | Enum | Worker or Supervisor |
state | Enum | Current lifecycle state |
cli | String | CLI being used (claude, auggie, etc.) |
current_task | Option | Task being worked on |
created_at | DateTime | When agent was created |
last_heartbeat | DateTime | Last activity timestamp |
tasks_completed | u64 | Count of completed tasks |
Agent States
+------------+
|  Starting  |   <- Agent is initializing
+-----+------+
      |
      v
+------------+       +------------+
|    Idle    | <---> |  Working   |   <- Can accept work / Executing task
+-----+------+       +------------+
      |
      v
+------------+       +------------+
|   Paused   |       |   Error    |   <- Temporarily paused / Something went wrong
+-----+------+       +------------+
      |
      v
+------------+
|  Stopped   |   <- Agent has terminated
+------------+
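The legal moves in the state diagram can be encoded as a small transition check. The states mirror the diagram; the exact rule set is my reading of it, not code from the crate:

```rust
/// Agent lifecycle states from the diagram above.
#[derive(Clone, Copy, PartialEq, Debug)]
enum AgentState { Starting, Idle, Working, Paused, Error, Stopped }

/// Whether `from -> to` is a legal move. The rule set here is an
/// assumption read off the diagram, not code from the crate.
fn can_transition(from: AgentState, to: AgentState) -> bool {
    use AgentState::*;
    match (from, to) {
        (Starting, Idle) => true,
        (Idle, Working) | (Working, Idle) => true,   // accept / finish work
        (Idle, Paused) | (Paused, Idle) => true,     // pause / resume
        (Working, Error) => true,                    // something went wrong
        (_, Stopped) => from != Stopped,             // any live state can stop
        _ => false,
    }
}

fn main() {
    assert!(can_transition(AgentState::Starting, AgentState::Idle));
    assert!(!can_transition(AgentState::Stopped, AgentState::Working));
    println!("transitions ok");
}
```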
Creating Agents
CLI
# With default model
tt spawn worker-1
# With specific model
tt spawn worker-1 --model claude
tt spawn worker-2 --model auggie
tt spawn reviewer --model codex
Rust API
#![allow(unused)]
fn main() {
use tinytown::{Town, Agent, AgentType};
let town = Town::connect(".").await?;
// Spawn returns a handle
let handle = town.spawn_agent("worker-1", "claude").await?;
// Handle provides operations
let id = handle.id();
let state = handle.state().await?;
let inbox_len = handle.inbox_len().await?;
}
Built-in Models
Tinytown comes with presets for popular AI coding agents:
| Model | Command | Agent |
|---|---|---|
claude | claude --print | Anthropic Claude |
auggie | augment | Augment Code |
codex | codex | OpenAI Codex |
gemini | gemini | Google Gemini |
copilot | gh copilot | GitHub Copilot |
aider | aider | Aider |
cursor | cursor | Cursor |
Agent Types
Worker (Default)
Workers execute tasks assigned to them:
#![allow(unused)]
fn main() {
let agent = Agent::new("worker-1", "claude", AgentType::Worker);
}
Supervisor
A special agent that coordinates workers:
#![allow(unused)]
fn main() {
let supervisor = Agent::supervisor("coordinator");
}
The supervisor has a well-known ID and can send messages to all workers.
Working with Agent Handles
#![allow(unused)]
fn main() {
let handle = town.spawn_agent("worker-1", "claude").await?;
// Assign a task
let task_id = handle.assign(Task::new("Build the API")).await?;
// Send a message
handle.send(MessageType::StatusRequest).await?;
// Check inbox
let pending = handle.inbox_len().await?;
// Get current state
if let Some(agent) = handle.state().await? {
println!("State: {:?}", agent.state);
println!("Current task: {:?}", agent.current_task);
}
// Wait for completion
handle.wait().await?;
}
Agent Storage in Redis
Agents are persisted in Redis using town-isolated keys:
tt:<town_name>:agent:<uuid> → JSON-serialized Agent struct
tt:<town_name>:inbox:<uuid> → List of pending messages
This town-isolated format allows multiple Tinytown projects to share the same Redis instance without key conflicts. See tt migrate for upgrading from older key formats.
This means:
- Agent state survives Redis restarts (with persistence)
- Multiple processes can coordinate via the same town
- Multiple towns can share the same Redis instance
- You can inspect state with redis-cli
Tasks
A Task is a unit of work that can be assigned to an agent.
Task Properties
| Property | Type | Description |
|---|---|---|
id | UUID | Unique identifier |
description | String | What needs to be done |
state | Enum | Current lifecycle state |
assigned_to | Option | Agent working on this |
created_at | DateTime | When created |
updated_at | DateTime | Last modification |
completed_at | Option | When finished |
result | Option | Output or error message |
parent_id | Option | For hierarchical tasks |
tags | Vec | Labels for filtering |
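The property table maps naturally onto a struct. A simplified sketch (field names follow the table; the concrete types, such as String timestamps in place of DateTime, are assumptions):

```rust
/// A sketch of the Task shape implied by the table above. Field names
/// follow the table; concrete types (e.g. String timestamps instead of
/// DateTime, a String state instead of an enum) are assumptions.
#[derive(Debug, Clone)]
struct Task {
    id: String,
    description: String,
    state: String,
    assigned_to: Option<String>,
    created_at: String,
    updated_at: String,
    completed_at: Option<String>,
    result: Option<String>,
    parent_id: Option<String>,
    tags: Vec<String>,
}

/// Convenience constructor for a freshly created, unassigned task.
fn new_pending(id: &str, description: &str) -> Task {
    Task {
        id: id.into(),
        description: description.into(),
        state: "pending".into(),
        assigned_to: None,
        created_at: String::new(), // the real system stamps these
        updated_at: String::new(),
        completed_at: None,
        result: None,
        parent_id: None,
        tags: Vec::new(),
    }
}

fn main() {
    let task = new_pending("auth-api", "Implement user authentication");
    assert!(task.assigned_to.is_none());
    println!("{:?}", task);
}
```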
Task States
+----------+
| Pending  |   <- Created, waiting for assignment
+----+-----+
     | assign()
     v
+----------+
| Assigned |   <- Given to an agent
+----+-----+
     | start()
     v
+----------+
| Running  |   <- Agent is working on it
+----+-----+
     |
     +-- complete() --> +-----------+
     |                  | Completed |
     |                  +-----------+
     |
     +-- fail() ------> +-----------+
     |                  |  Failed   |
     |                  +-----------+
     |
     +-- cancel() ----> +-----------+
                        | Cancelled |
                        +-----------+
Creating Tasks
CLI
# Assign a task directly to an agent
tt assign worker-1 "Implement user authentication"
# Check pending tasks
tt tasks
Using tasks.toml (Recommended)
Define tasks in a file for version control and batch assignment:
[[tasks]]
id = "auth-api"
description = "Implement user authentication"
agent = "backend"
status = "pending"
tags = ["auth", "api"]
[[tasks]]
id = "auth-tests"
description = "Write tests for auth API"
agent = "tester"
status = "pending"
parent = "auth-api"
Then sync to Redis:
tt sync push
See tt plan for the full task DSL.
Task Lifecycle
Tasks move through states automatically as agents work on them:
- Pending → Created, waiting for assignment
- Assigned → Given to an agent via tt assign
- Running → Agent is actively working
- Completed/Failed/Cancelled → Terminal states
Check task state with:
tt tasks
Or inspect directly in Redis:
redis-cli -s ./redis.sock GET "tt:<town_name>:task:<uuid>"
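The lifecycle above can be sketched as a tiny state machine whose events mirror the diagram's labels (assign/start/complete/fail/cancel). Which states permit cancel() is my reading of the diagram:

```rust
/// Task lifecycle states from the diagram above.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TaskState { Pending, Assigned, Running, Completed, Failed, Cancelled }

/// Advance a task along the lifecycle. The event names mirror the
/// diagram's labels; which states permit "cancel" is an assumption.
fn advance(state: TaskState, event: &str) -> Result<TaskState, String> {
    use TaskState::*;
    match (state, event) {
        (Pending, "assign") => Ok(Assigned),
        (Assigned, "start") => Ok(Running),
        (Running, "complete") => Ok(Completed),
        (Running, "fail") => Ok(Failed),
        (Pending | Assigned | Running, "cancel") => Ok(Cancelled),
        _ => Err(format!("illegal event {:?} in state {:?}", event, state)),
    }
}

fn main() {
    let mut s = TaskState::Pending;
    for ev in ["assign", "start", "complete"] {
        s = advance(s, ev).unwrap();
    }
    assert_eq!(s, TaskState::Completed);
    println!("lifecycle ok");
}
```

Terminal states (Completed, Failed, Cancelled) accept no further events.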
Task Storage in Redis
Tasks are stored as JSON using town-isolated keys:
tt:<town_name>:task:<uuid> → JSON-serialized Task struct
This allows multiple towns to share the same Redis instance. You can inspect tasks directly:
redis-cli -s ./redis.sock GET "tt:<town_name>:task:550e8400-e29b-41d4-a716-446655440000"
See tt migrate for upgrading from older key formats.
Hierarchical Tasks
Create parent-child relationships in tasks.toml:
[[tasks]]
id = "user-system"
description = "User Management System"
agent = "architect"
status = "pending"
[[tasks]]
id = "signup"
description = "User signup flow"
parent = "user-system"
agent = "backend"
status = "pending"
[[tasks]]
id = "login"
description = "User login flow"
parent = "user-system"
agent = "backend"
status = "pending"
Task Tags
Use tags to categorize and filter:
[[tasks]]
id = "fix-xss"
description = "Fix XSS vulnerability"
status = "pending"
tags = ["security", "bug", "P0"]
Comparison with Gastown Beads
| Feature | Tinytown Tasks | Gastown Beads |
|---|---|---|
| Storage | Redis (JSON) | Dolt SQL |
| Hierarchy | Optional parent_id | Full graph |
| Metadata | Tags array | Full schema |
| Persistence | Redis persistence | Git-backed |
| Complexity | Simple | Complex |
Tinytown tasks are intentionally simpler. If you need Gastown's bead features (dependency graphs, git versioning, SQL queries), consider using Gastown or building on top of Tinytown.
Messages
Messages are how agents communicate. They're the envelopes that carry task assignments, status updates, and coordination signals.
Message Properties
| Property | Type | Description |
|---|---|---|
id | UUID | Unique message identifier |
from | AgentId | Sender |
to | AgentId | Recipient |
msg_type | Enum | Type and payload |
priority | Enum | Processing order |
created_at | DateTime | When sent |
correlation_id | Option | For request/response |
Message Types
#![allow(unused)]
fn main() {
pub enum MessageType {
// Semantic message types (for inter-agent communication)
Task { description: String }, // Actionable work request
Query { question: String }, // Question expecting response
Informational { summary: String }, // FYI/context update
Confirmation { ack_type: ConfirmationType }, // Receipt/acknowledgment
// Task management
TaskAssign { task_id: String },
TaskDone { task_id: String, result: String },
TaskFailed { task_id: String, error: String },
// Status
StatusRequest,
StatusResponse { state: String, current_task: Option<String> },
// Lifecycle
Ping,
Pong,
Shutdown,
// Extensibility
Custom { kind: String, payload: String },
}
pub enum ConfirmationType {
Received, // Message was received
Acknowledged, // Message was acknowledged
Thanks, // Message expressing thanks
Approved, // Approval confirmation
Rejected { reason: String }, // Rejection with reason
}
}
Semantic Message Classification
Messages are classified as either actionable or informational:
| Actionable (require work) | Informational (context only) |
|---|---|
Task, Query, TaskAssign | Informational, Confirmation |
StatusRequest, Ping | TaskDone, TaskFailed |
Shutdown, Custom | StatusResponse, Pong |
Use msg.is_actionable() and msg.is_informational_or_confirmation() helpers to classify.
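The split in the table can be expressed as a simple predicate. Below is a standalone sketch of what is_actionable() plausibly does (payloads omitted); it is not the crate's actual implementation:

```rust
/// The actionable/informational split from the table above, as a
/// standalone sketch. The real crate exposes this via
/// msg.is_actionable(); the enum here is simplified (no payloads).
#[derive(Debug, PartialEq)]
enum MessageType {
    Task, Query, TaskAssign, StatusRequest, Ping, Shutdown, Custom,
    Informational, Confirmation, TaskDone, TaskFailed, StatusResponse, Pong,
}

fn is_actionable(m: &MessageType) -> bool {
    use MessageType::*;
    matches!(m, Task | Query | TaskAssign | StatusRequest | Ping | Shutdown | Custom)
}

fn main() {
    assert!(is_actionable(&MessageType::TaskAssign));
    assert!(!is_actionable(&MessageType::TaskDone));
    println!("classification ok");
}
```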
Priority Levels
Messages are processed by priority:
| Priority | Behavior |
|---|---|
| Urgent | Goes to the front of the queue, interrupts current work |
| High | Goes to the front of the queue |
| Normal | Goes to the back of the queue (default) |
| Low | Goes to the back of the queue, processed when idle |
let msg = Message::new(from, to, MessageType::Shutdown)
    .with_priority(Priority::Urgent);
Creating Messages
#![allow(unused)]
fn main() {
use tinytown::{Message, MessageType, AgentId, Priority};
// Basic message
let msg = Message::new(
AgentId::supervisor(), // from
worker_id, // to
MessageType::TaskAssign { task_id: "abc123".into() }
);
// With priority
let urgent = Message::new(from, to, MessageType::Shutdown)
.with_priority(Priority::Urgent);
// With correlation (for request/response)
let request = Message::new(from, to, MessageType::StatusRequest);
let response = Message::new(to, from, MessageType::StatusResponse {
state: "working".into(),
current_task: Some("task-123".into())
}).with_correlation(request.id);
}
Sending Messages
Via Channel:
#![allow(unused)]
fn main() {
let channel = town.channel();
channel.send(&message).await?;
}
Via AgentHandle:
#![allow(unused)]
fn main() {
let handle = town.agent("worker-1").await?;
handle.send(MessageType::StatusRequest).await?;
}
Receiving Messages
#![allow(unused)]
fn main() {
use std::time::Duration;
// Blocking receive (waits up to the timeout); loop until Shutdown
loop {
    if let Some(msg) = channel.receive(agent_id, Duration::from_secs(30)).await? {
        match msg.msg_type {
            MessageType::TaskAssign { task_id } => {
                println!("Got task: {}", task_id);
            }
            MessageType::Shutdown => {
                println!("Shutting down");
                break;
            }
            _ => {}
        }
    }
}
// Non-blocking receive
if let Some(msg) = channel.try_receive(agent_id).await? {
// Process message
}
}
Broadcasting
Send to all agents:
#![allow(unused)]
fn main() {
let broadcast = Message::new(
AgentId::supervisor(),
AgentId::supervisor(), // Placeholder, broadcast ignores this
MessageType::Shutdown
);
channel.broadcast(&broadcast).await?;
}
Custom Messages
For application-specific communication:
#![allow(unused)]
fn main() {
// Define your payload as JSON
let payload = serde_json::json!({
"pr_url": "https://github.com/...",
"files_changed": ["src/auth.rs", "tests/auth_test.rs"]
});
let msg = Message::new(from, to, MessageType::Custom {
kind: "pr_ready".into(),
payload: payload.to_string()
});
}
Message Flow Example
Supervisor                           Worker
    |                                  |
    |   TaskAssign{task-123}           |
    | -------------------------------> |
    |                                  |
    |                                  |  (working...)
    |                                  |
    |   TaskDone{task-123}             |
    | <------------------------------- |
    |                                  |
Comparison with Gastown Mail
| Feature | Tinytown Messages | Gastown Mail |
|---|---|---|
| Transport | Redis lists | Beads (git-backed) |
| Persistence | Redis persistence | Git commits |
| Priority | 4 levels | Yes |
| Routing | Direct to inbox | Complex routing |
| Recovery | Redis replays | Event replay |
Tinytown messages are ephemeral by default (in Redis memory). Enable Redis persistence for durability.
Channels
The Channel is Tinytown's message transport layer. It's a thin wrapper around Redis that provides queues, pub/sub, and state storage.
Why Redis?
Redis is perfect for agent orchestration:
| Feature | Benefit |
|---|---|
| Unix sockets | Sub-millisecond latency |
| Lists | Perfect for message queues |
| BLPOP | Efficient blocking receive |
| Pub/Sub | Broadcast to all agents |
| Persistence | Survives crashes |
| Simple | No complex setup |
Channel Operations
Send a Message
#![allow(unused)]
fn main() {
channel.send(&message).await?;
}
Messages go to the recipientβs inbox (tt:<town>:inbox:<agent-id>).
Priority handling:
- Urgent/High → LPUSH (front of queue)
- Normal/Low → RPUSH (back of queue)
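This front-vs-back rule is easy to simulate with a double-ended queue. A sketch of the documented ordering (not Tinytown's code):

```rust
use std::collections::VecDeque;

#[derive(Clone, Copy, PartialEq, Debug)]
enum Priority { Low, Normal, High, Urgent }

/// Simulate the documented inbox ordering: Urgent/High jump the queue
/// (LPUSH in Redis terms), Normal/Low append to it (RPUSH).
fn enqueue(inbox: &mut VecDeque<(&'static str, Priority)>, msg: &'static str, p: Priority) {
    match p {
        Priority::Urgent | Priority::High => inbox.push_front((msg, p)),
        Priority::Normal | Priority::Low => inbox.push_back((msg, p)),
    }
}

fn main() {
    let mut inbox = VecDeque::new();
    enqueue(&mut inbox, "update README", Priority::Normal);
    enqueue(&mut inbox, "prod is down!", Priority::Urgent);
    // The urgent message is delivered first:
    assert_eq!(inbox.pop_front().unwrap().0, "prod is down!");
}
```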
Receive a Message
#![allow(unused)]
fn main() {
// Blocking (waits up to timeout)
let msg = channel.receive(agent_id, Duration::from_secs(30)).await?;
// Non-blocking
let msg = channel.try_receive(agent_id).await?;
}
Uses BLPOP for efficient waiting without polling.
Check Inbox Length
#![allow(unused)]
fn main() {
let pending = channel.inbox_len(agent_id).await?;
println!("{} messages waiting", pending);
}
Broadcast
#![allow(unused)]
fn main() {
channel.broadcast(&message).await?;
}
Uses Redis Pub/Sub (PUBLISH tt:broadcast).
State Storage
The channel also stores agent and task state:
Agent State
#![allow(unused)]
fn main() {
// Store
channel.set_agent_state(&agent).await?;
// Retrieve
let agent = channel.get_agent_state(agent_id).await?;
}
Stored at: tt:<town>:agent:<uuid>
Task State
#![allow(unused)]
fn main() {
// Store
channel.set_task(&task).await?;
// Retrieve
let task = channel.get_task(task_id).await?;
}
Stored at: tt:<town>:task:<uuid>
Redis Key Patterns
Keys are town-isolated to allow multiple towns to share the same Redis instance:
| Pattern | Type | Purpose |
|---|---|---|
tt:<town>:inbox:<uuid> | List | Agent message queue |
tt:<town>:agent:<uuid> | String | Agent state (JSON) |
tt:<town>:task:<uuid> | String | Task state (JSON) |
tt:broadcast | Pub/Sub | Broadcast channel |
See tt migrate for upgrading from older key formats.
Direct Redis Access
Sometimes you want to query Redis directly:
# Connect to town's Redis
redis-cli -s ./redis.sock
# List all agent inboxes for your town
KEYS tt:<town_name>:inbox:*
# Check inbox length
LLEN tt:<town_name>:inbox:550e8400-e29b-41d4-a716-446655440000
# View agent state
GET tt:<town_name>:agent:550e8400-e29b-41d4-a716-446655440000
# Monitor all messages
MONITOR
Performance
Unix socket performance is excellent:
| Operation | Latency |
|---|---|
| Send message | ~0.1ms |
| Receive (cached) | ~0.1ms |
| State get/set | ~0.1ms |
| TCP equivalent | ~1-2ms |
For local development, this means near-instant coordination.
Persistence
By default, Redis runs in-memory only. For durability:
Option 1: RDB Snapshots
redis-cli -s ./redis.sock CONFIG SET save "60 1"
Saves every 60 seconds if at least 1 key changed.
Option 2: AOF (Append Only File)
redis-cli -s ./redis.sock CONFIG SET appendonly yes
Logs every write for full durability.
Creating a Channel
Usually you don't create channels directly; the Town does it:
#![allow(unused)]
fn main() {
// Get channel from town
let channel = town.channel();
// Or create manually (advanced)
use redis::aio::ConnectionManager;
let client = redis::Client::open("unix:///path/to/redis.sock")?;
let conn = ConnectionManager::new(client).await?;
let channel = Channel::new(conn);
}
Agent Coordination
How agents work together and decide when tasks are complete.
The Simple Model
Tinytown keeps coordination simple:
- Conductor orchestrates (spawns agents, assigns tasks)
- Workers do the work
- Reviewer decides when work is done
- Conductor monitors and coordinates handoffs
The Reviewer Pattern
Always include a reviewer agent. They're your quality gate:
+------------+      work       +------------+
| Conductor  | --------------> |   Worker   |
+-----+------+                 +-----+------+
      |                              |
      |                              | completes
      |                              v
      |      review request    +------------+
      | ---------------------> |  Reviewer  |
      |                        +-----+------+
      |                              |
      + <----------------------------+
      |   approve / reject
      v
 Done (or assign fixes)
Why a Reviewer?
Without a reviewer, who decides "done"?
| Approach | Problem |
|---|---|
| Worker decides | "I'm done," but is it good? |
| Conductor decides | Conductor may not understand the domain |
| User decides | User has to check everything |
| Reviewer decides | ✅ Separation of concerns |
The reviewer pattern is used everywhere: code review, QA, editing. It works.
How It Works in Practice
1. Conductor Spawns Team
tt spawn backend
tt spawn frontend
tt spawn reviewer # Always include!
2. Workers Work
tt assign backend "Build the API"
tt assign frontend "Build the UI"
3. Conductor Requests Review
When tt status shows workers are idle:
tt assign reviewer "Review the API implementation. Check security, error handling, tests. Approve or list what needs fixing."
4. Reviewer Responds
The reviewer either:
- Approves: "LGTM, API is solid"
- Requests changes: "Password hashing uses a weak algorithm, fix needed"
5. Conductor Acts
- If approved β task is done
- If changes needed β
tt assign backend "Fix: use bcrypt instead of md5"
Messages Between Agents
Agents can send messages directly via their inboxes:
// In code (for custom integrations)
let msg = Message::new(worker_id, reviewer_id, MessageType::Custom {
    kind: "ready_for_review".into(),
    payload: r#"{"files": ["src/api.rs"]}"#.into(),
});
channel.send(&msg).await?;
But for simplicity, the conductor handles coordination. Agents don't need to message each other directly; the conductor assigns review tasks when workers are done.
Keeping It Simple
Tinytown deliberately avoids:
- ❌ Complex state machines
- ❌ Automatic dependency resolution
- ❌ Event-driven triggers
Instead:
- ✅ Conductor checks tt status
- ✅ Conductor assigns next task
- ✅ Reviewer is the quality gate
This is explicit and easy to understand. You always know what's happening.
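As a sketch, the explicit conductor loop boils down to: find an idle agent, hand it the next task. The `Status` enum and task queue below are illustrative stand-ins, not Tinytown's real API.

```rust
use std::collections::VecDeque;

// Illustrative stand-ins for agent status and the task queue.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum Status { Idle, Working }

/// One conductor "tick": give the next queued task to the first idle agent.
pub fn tick(
    agents: &mut Vec<(String, Status)>,
    queue: &mut VecDeque<String>,
) -> Option<(String, String)> {
    for (name, status) in agents.iter_mut() {
        if *status == Status::Idle {
            if let Some(task) = queue.pop_front() {
                *status = Status::Working;
                return Some((name.clone(), task)); // (agent, task) to assign
            }
        }
    }
    None // nothing to do this round
}
```

In the real workflow, "tick" is you (or the conductor agent) running tt status and tt assign.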
Comparison with Gastown
| Aspect | Gastown | Tinytown |
|---|---|---|
| Coordination | Mayor + Witness + Hooks | Conductor + Reviewer |
| Completion | Complex bead states | Reviewer approves |
| Automation | Event-driven | Conductor-driven |
| Complexity | High | Low |
Gastown automates more but is harder to understand. Tinytown is explicit.
Mission Mode
Mission Mode enables autonomous, dependency-aware orchestration of multiple GitHub issues with automatic PR/CI monitoring.
Overview
While regular Tinytown tasks are great for individual work items, mission mode is designed for larger objectives that span multiple issues, require dependency tracking, and need ongoing monitoring of external events like CI status and code reviews.
Think of a mission as a "project manager" that:
- Accepts multiple GitHub issues as objectives
- Builds a dependency-aware execution plan (DAG)
- Delegates work to best-fit agents automatically
- Monitors PRs, CI, and review status
- Persists state for restart/resume capability
Core Concepts
MissionRun
The top-level orchestration record. A MissionRun owns:
- Objectives: GitHub issues or documents to complete
- Work Items: Individual tasks extracted from objectives
- Watch Items: Monitoring tasks for PRs/CI
- Policy: Execution rules (parallelism, review gates, etc.)
Mission States
┌──────────┐
│ Planning │ ← Compiling work graph from objectives
└────┬─────┘
     │
     ▼
┌──────────┐
│ Running  │ ← Active execution
└────┬─────┘
     │
     ├──→ ┌──────────┐
     │    │ Blocked  │ ← Waiting on external event
     │    └──────────┘
     │
     ├──→ ┌───────────┐
     │    │ Completed │ ← All objectives done
     │    └───────────┘
     │
     └──→ ┌────────┐
          │ Failed │ ← Unrecoverable error
          └────────┘
Work Items
Individual units of work in the mission DAG. Each work item:
- Has a status: pending → ready → assigned → running → done
- May depend on other work items
- Gets assigned to an agent based on role fit
Watch Items
Scheduled monitoring tasks that poll for external events:
- PR Checks: CI pass/fail status
- Reviews: Human review comments
- Bugbot: Automated security reports
- Mergeability: Conflict and merge status
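A watch item is essentially a polling task with a kind and a next-due time. A minimal sketch (the types and field names are illustrative, not Tinytown's real structs):

```rust
use std::time::{Duration, Instant};

/// The kinds of external events a watch item can poll for.
pub enum WatchKind { PrChecks, Reviews, Bugbot, Mergeability }

/// A scheduled monitoring task: poll, then reschedule one interval out.
pub struct WatchItem {
    pub kind: WatchKind,
    pub next_due: Instant,
    pub interval: Duration,
}

impl WatchItem {
    /// True if the scheduler should poll this item now.
    pub fn is_due(&self, now: Instant) -> bool {
        now >= self.next_due
    }

    /// After polling, push the next check one interval into the future.
    pub fn reschedule(&mut self, now: Instant) {
        self.next_due = now + self.interval;
    }
}
```

The 180-second default of watch_interval_secs maps to the `interval` field here.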
Dependency Detection
The mission compiler parses issue bodies for dependency markers:
<!-- In your GitHub issue body -->
This feature depends on #42.
After #41 is complete, we can start this.
Blocked by #40.
Supported patterns:
depends on #N, after #N, blocked by #N, requires #N
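A minimal, stdlib-only sketch of this parsing (the real mission compiler may use different matching rules):

```rust
/// Scan an issue body for dependency markers like `depends on #42`,
/// `after #41`, `blocked by #40`, or `requires #39` and return the
/// referenced issue numbers, sorted and deduplicated.
pub fn parse_deps(body: &str) -> Vec<u64> {
    let lower = body.to_lowercase();
    let markers = ["depends on #", "after #", "blocked by #", "requires #"];
    let mut deps = Vec::new();
    for marker in markers {
        let mut rest = lower.as_str();
        while let Some(pos) = rest.find(marker) {
            let tail = &rest[pos + marker.len()..];
            // Collect the digits right after the `#`.
            let digits: String = tail.chars().take_while(|c| c.is_ascii_digit()).collect();
            if let Ok(n) = digits.parse::<u64>() {
                if !deps.contains(&n) {
                    deps.push(n);
                }
            }
            rest = tail; // keep scanning the remainder of the body
        }
    }
    deps.sort_unstable();
    deps
}
```

Running it on the issue body above would yield the issue numbers 40, 41, and 42.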
Mission Policy
Control execution behavior with policy settings:
| Setting | Default | Description |
|---|---|---|
| max_parallel_items | 2 | Max concurrent work items |
| reviewer_required | true | Require review before merge |
| auto_merge | false | Merge PRs automatically on approval |
| watch_interval_secs | 180 | How often to poll PR/CI status |
Example Workflow
# Start a mission from multiple issues
tt mission start --issue 23 --issue 24 --issue 25
# Check mission status
tt mission status
# View detailed work items
tt mission status --work
# Stop a mission gracefully
tt mission stop <run-id>
# Resume a stopped mission
tt mission resume <run-id>
Scheduler Loop
The scheduler runs every 30 seconds (configurable) and:
- Loads active missions from Redis
- Checks due watch items, executes triggers
- Promotes pending work items to ready when dependencies satisfied
- Matches ready items to idle agents by role fit
- Enforces reviewer gates before advancing
- Marks mission completed when no items remain
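Step 3 (promotion) is the heart of the dependency handling: a pending item becomes ready once every one of its dependencies is done. A sketch, with an illustrative WorkItem type rather than Tinytown's real struct:

```rust
use std::collections::HashSet;

/// Illustrative work item: an id, the ids it depends on, and a ready flag.
pub struct WorkItem {
    pub id: u64,
    pub deps: Vec<u64>,
    pub ready: bool,
}

/// Promote every pending item whose dependencies are all in `done`.
pub fn promote(items: &mut [WorkItem], done: &HashSet<u64>) {
    for item in items.iter_mut() {
        if !item.ready && item.deps.iter().all(|d| done.contains(d)) {
            item.ready = true;
        }
    }
}
```

Each scheduler round calls this after recording newly completed items, which is why finishing Issue #1 unblocks Issue #2 on the next pass.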
Agent Routing
Work items are matched to agents using role-fit scoring:
- Exact match: owner_role: "backend" → agent with backend role
- Generic fallback: any idle worker
- Load balancing: Avoid assigning too many items to one agent
- Reviewer reservation: Keep reviewer available for gates
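A sketch of what role-fit scoring could look like. The weights here are assumptions for illustration, not Tinytown's actual algorithm:

```rust
/// Score how well an idle agent fits a work item.
/// Exact role match beats the generic fallback; a loaded agent scores lower.
pub fn role_fit(agent_role: &str, item_role: Option<&str>, assigned_items: usize) -> i32 {
    let base = match item_role {
        Some(role) if role == agent_role => 100, // exact match
        Some(_) => 0,                            // wrong specialist: don't take it
        None => 10,                              // generic fallback: any idle worker
    };
    base - assigned_items as i32 // light load balancing
}
```

The scheduler would pick the highest-scoring idle agent for each ready item, skipping the reserved reviewer.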
Redis Storage
Missions persist in Redis with these keys:
tt:{town}:mission:{run_id} # MissionRun metadata
tt:{town}:mission:{run_id}:work # WorkItem collection
tt:{town}:mission:{run_id}:watch # WatchItem collection
tt:{town}:mission:{run_id}:events # Activity log (last 100)
tt:{town}:mission:active # Set of active MissionIds
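Building these keys is plain string formatting; a small sketch (the helper name is hypothetical):

```rust
/// Build a mission key: `tt:{town}:mission:{run_id}` plus an optional
/// suffix such as `work`, `watch`, or `events`.
pub fn mission_key(town: &str, run_id: &str, suffix: Option<&str>) -> String {
    match suffix {
        Some(s) => format!("tt:{town}:mission:{run_id}:{s}"),
        None => format!("tt:{town}:mission:{run_id}"),
    }
}
```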
See Also
Tutorial: Single Agent Workflow
Let's build a complete workflow with one agent doing a coding task.
What We'll Build
A simple system that:
- Creates a coding task
- Assigns it to an agent
- Waits for completion
- Reports the result
Setup
mkdir single-agent-demo && cd single-agent-demo
tt init --name demo
The Code
Create main.rs:
use tinytown::{Town, Task, Result};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<()> {
// Connect to town
let town = Town::connect(".").await?;
// Create an agent
let agent = town.spawn_agent("coder", "claude").await?;
println!("π€ Spawned agent: {}", agent.id());
// Create a task
let task = Task::new(
"Create a Rust function that calculates fibonacci numbers recursively"
);
println!("π Created task: {}", task.id);
// Assign to agent
let task_id = agent.assign(task).await?;
println!("✅ Assigned task {} to coder", task_id);
// Check status periodically
loop {
tokio::time::sleep(Duration::from_secs(5)).await;
if let Some(state) = agent.state().await? {
println!(" Agent state: {:?}", state.state);
match state.state {
tinytown::AgentState::Idle => {
println!("π Agent completed work!");
break;
}
tinytown::AgentState::Error => {
println!("β Agent encountered error");
break;
}
_ => continue,
}
}
}
// Get task result
if let Some(task) = town.channel().get_task(task_id).await? {
println!("\nπ Task result:");
println!(" State: {:?}", task.state);
if let Some(result) = task.result {
println!(" Output: {}", result);
}
}
Ok(())
}
Running It
# In terminal 1: Keep the town running
tt start
# In terminal 2: Run your code
cargo run
What Happens
- Town connects to the existing town (and its Redis)
- Agent spawns with state Starting → Idle
- Task is created with state Pending
- Assignment sends a TaskAssign message to the agent's inbox
- Agent receives the message (in a real setup, Claude would process it)
- Polling checks agent state every 5 seconds
- Completion when agent returns to Idle
The Message Flow
Your Code                    Redis                        Agent
    │                          │                            │
    │ spawn_agent()            │                            │
    │ ───────────────────────→ │                            │
    │                          │  SET tt:<town>:agent:xxx   │
    │                          │                            │
    │ assign(task)             │                            │
    │ ───────────────────────→ │                            │
    │                          │  SET tt:<town>:task:yyy    │
    │                          │  RPUSH tt:<town>:inbox:xxx │
    │                          │                            │
    │                          │  BLPOP tt:<town>:inbox:xxx │
    │                          │ ←───────────────────────── │
    │                          │                            │
    │ state()                  │                            │
    │ ───────────────────────→ │                            │
    │                          │  GET tt:<town>:agent:xxx   │
    │ ←──────────────────────  │                            │
Simulating the Agent
In a real workflow, Claude (or another AI) receives the task. For testing, you can simulate completion:
# In redis-cli (replace <town> with your town name)
redis-cli -s ./redis.sock
# Get the inbox message
LPOP tt:<town>:inbox:550e8400-e29b-41d4-a716-446655440000
# Update agent state to idle
# (In practice, the agent process does this)
Key Takeaways
- Towns manage Redis – you don't need to start it manually
- Agents are stateful – their state persists in Redis
- Tasks are tracked – full lifecycle from pending to complete
- Messages are reliable – Redis lists ensure delivery
Next Steps
- Multi-Agent Coordination – Coordinate multiple agents
- Task Pipelines – Chain tasks together
Tutorial: Multi-Agent Coordination
Let's coordinate multiple agents working together on a feature.
The Scenario
We'll build a system where:
- Architect designs the API
- Developer implements it
- Tester writes tests
- Reviewer reviews everything
Setup
mkdir multi-agent-demo && cd multi-agent-demo
tt init --name multi-demo
Spawning the Team
tt spawn architect --model claude
tt spawn developer --model auggie
tt spawn tester --model codex
tt spawn reviewer --model claude
Check your team:
tt list
Sequential Pipeline with tasks.toml
Define your workflow in tasks.toml:
[meta]
description = "Auth API Pipeline"
[[tasks]]
id = "design"
description = "Design a REST API for user authentication with JWT tokens"
agent = "architect"
status = "pending"
[[tasks]]
id = "implement"
description = "Implement the auth API from the architect's design"
agent = "developer"
status = "pending"
parent = "design"
[[tasks]]
id = "test"
description = "Write comprehensive tests for the auth API"
agent = "tester"
status = "pending"
parent = "implement"
Then sync to Redis and let the conductor orchestrate:
tt sync push
tt conductor
Parallel Execution
When tasks are independent, assign them all at once:
# Spawn agents
tt spawn frontend --model claude
tt spawn backend --model auggie
tt spawn docs --model codex
# Assign tasks (they run in parallel)
tt assign frontend "Build the login UI"
tt assign backend "Build the auth API"
tt assign docs "Write API documentation"
# Monitor progress
tt status
Fan-Out / Fan-In
Use the conductor for complex workflows:
# Spawn workers
tt spawn worker-1 --model claude
tt spawn worker-2 --model claude
tt spawn worker-3 --model claude
tt spawn reviewer --model claude
# Assign work to all workers
tt assign worker-1 "Implement module A"
tt assign worker-2 "Implement module B"
tt assign worker-3 "Implement module C"
# Monitor until all complete
tt status
# Then aggregate with reviewer
tt assign reviewer "Review modules A, B, and C for consistency"
Agent-to-Agent Communication
Agents can send messages to each other:
# Send a message to another agent
tt send reviewer "Auth API implementation complete. Ready for review."
# Send an urgent message
tt send reviewer --urgent "Critical bug found in module A!"
Comparison with Gastown
| Pattern | Tinytown | Gastown |
|---|---|---|
| Sequential | wait_for_idle() loop | Convoy + Beads events |
| Parallel | tokio::join! | Mayor distributes |
| Fan-out/in | Manual coordination | Convoy tracking |
| Messaging | Direct channel.send() | Mail protocol |
Tinytown is more explicit: you write the coordination logic. Gastown abstracts it with Convoys and the Mayor. Choose based on your needs.
Next Steps
- Task Pipelines – Build complex workflows
- Error Handling – Handle failures gracefully
Tutorial: Task Pipelines
Build structured workflows with task dependencies and hierarchies.
What We'll Build
A code review pipeline:
- Developer writes code
- Linter checks style
- Tester writes tests
- Reviewer approves
- Merger deploys
Pipeline with tasks.toml
Define your pipeline in tasks.toml:
[meta]
description = "User Profile Feature Pipeline"
# Epic (parent task)
[[tasks]]
id = "profile-epic"
description = "Implement user profile feature"
status = "pending"
tags = ["epic", "q1-2024"]
# Subtasks under the epic
[[tasks]]
id = "design"
description = "Design profile API schema"
agent = "architect"
status = "pending"
parent = "profile-epic"
tags = ["design"]
[[tasks]]
id = "implement"
description = "Implement profile endpoints"
agent = "developer"
status = "pending"
parent = "profile-epic"
tags = ["backend"]
[[tasks]]
id = "test"
description = "Write profile API tests"
agent = "tester"
status = "pending"
parent = "profile-epic"
tags = ["testing"]
[[tasks]]
id = "review"
description = "Review profile implementation"
agent = "reviewer"
status = "pending"
parent = "profile-epic"
tags = ["review"]
Run the pipeline:
# Initialize the plan
tt plan --init
# Spawn the team
tt spawn architect --model claude
tt spawn developer --model auggie
tt spawn tester --model codex
tt spawn reviewer --model claude
# Push tasks to Redis
tt sync push
# Start the conductor to orchestrate
tt conductor
Sequential Pipeline via CLI
For simple sequential workflows:
# Stage 1: Design
tt assign architect "Design the feature architecture"
# Wait for completion, then...
# Stage 2: Implement
tt assign developer "Implement the feature"
# Wait for completion, then...
# Stage 3: Test
tt assign tester "Write tests for the feature"
# Wait for completion, then...
# Stage 4: Review
tt assign reviewer "Review the implementation"
Use tt status to monitor progress between stages.
Multi-Stage Pipeline Example
A complete tasks.toml for a CI/CD-like pipeline:
[meta]
description = "Code Review Pipeline"
default_agent = "developer"
[[tasks]]
id = "lint"
description = "Run linting on src/"
agent = "linter"
status = "pending"
[[tasks]]
id = "build"
description = "Build the project"
agent = "builder"
status = "pending"
parent = "lint"
[[tasks]]
id = "test"
description = "Run test suite"
agent = "tester"
status = "pending"
parent = "build"
[[tasks]]
id = "review"
description = "Code review"
agent = "reviewer"
status = "pending"
parent = "test"
[[tasks]]
id = "deploy"
description = "Deploy to staging"
agent = "deployer"
status = "pending"
parent = "review"
Best Practices
- Use parent tasks for grouping related work
- Tag tasks for easy filtering and reporting
- Keep stages small – easier to retry and debug
- Log stage transitions – helps troubleshooting
- Handle failures gracefully – don't crash the whole pipeline
Next Steps
Tutorial: Error Handling & Recovery
Things go wrong. Agents crash, tasks fail, Redis restarts. Here's how to handle it.
Checking Agent Health
Use CLI commands to monitor agent state:
# Check all agents
tt status
# List agents and their states
tt list
# Check pending tasks
tt tasks
Agent states to watch for:
- Idle – ready for work
- Working – busy but healthy
- Error – something went wrong
- Stopped – agent terminated
Checking Task State
View task status with the CLI:
# See all pending tasks by agent
tt tasks
# Check a specific agent's inbox
tt inbox <agent-name>
Respawning Failed Agents
If an agent dies, spawn a new one:
# Check if agent exists
tt list
# If stopped or missing, respawn it
tt spawn worker-1 --model claude
# Or prune stale agents first
tt prune
tt spawn worker-1 --model claude
Graceful Shutdown
To stop agents gracefully:
# Stop a specific agent
tt kill worker-1
# Stop the entire town (saves state first)
tt save
tt stop
Recovery Checklist
When things go wrong:
- Check Redis – is redis-server running? (redis-cli -s ./redis.sock PING)
- Check agent state – what state is it in? (tt list, tt status)
- Check inbox – are messages stuck? (tt inbox <agent-name>)
- Check tasks – what tasks are pending? (tt tasks)
- Check logs – look in the logs/ directory
Comparison with Gastown Recovery
| Feature | Tinytown | Gastown |
|---|---|---|
| Auto-recovery | Manual (you write it) | Witness patrol |
| State persistence | Redis | Git-backed beads |
| Crash detection | Check agent state | Boot/Deacon monitors |
| Work resumption | Reassign tasks | Hook-based (automatic) |
Tinytown puts you in control. Gastown automates more but is more complex. Choose based on your reliability requirements.
Tutorial: Mission Mode
Orchestrate multiple GitHub issues as a single autonomous mission.
What We'll Build
A mission that implements a user authentication feature spanning three issues:
- Issue #1: Design auth API
- Issue #2: Implement auth endpoints
- Issue #3: Write auth tests
The issues have natural dependencies: design → implement → test.
Prerequisites
- A running Tinytown instance (tt init, Redis available)
- GitHub issues created with dependency markers
- Multiple agents spawned with appropriate roles
Step 1: Create Issues with Dependencies
In your GitHub issues, use dependency markers:
Issue #1: Design auth API
Design the authentication API schema.
- Define login/logout endpoints
- Document token format
- Specify error responses
Issue #2: Implement auth endpoints
Implement the authentication endpoints.
Depends on #1.
- Implement login endpoint
- Implement logout endpoint
- Add token validation
Issue #3: Write auth tests
Write comprehensive tests for auth API.
After #2.
- Unit tests for token validation
- Integration tests for login flow
- Error case coverage
Step 2: Spawn Your Team
# Spawn agents with appropriate roles
tt spawn designer --model claude
tt spawn backend --model auggie
tt spawn tester --model codex
Step 3: Start the Mission
tt mission start --issue 1 --issue 2 --issue 3
Output:
π Mission started: abc123-def456-...
π Objectives: 3 issues
π¦ Work items: 3
β³ Issue #1: Design auth API
β³ Issue #2: Implement auth endpoints
β³ Issue #3: Write auth tests
Step 4: Monitor Progress
# Check overall status
tt mission status
# Detailed work item view
tt mission status --work
# Watch PR/CI monitors
tt mission status --work --watch
Step 5: Understand the Scheduler
The mission scheduler runs every 30 seconds and:
- Promotes work items: Issue #1 starts immediately (no deps)
- Assigns to agents: Designer gets Issue #1
- Monitors completion: When #1 done, #2 becomes ready
- Watches PRs: Creates watch items for CI status
- Enforces gates: Reviewer approval before merge
Round 1:  Issue #1 → ready → assigned to designer
Round 5:  Issue #1 done → Issue #2 ready → assigned to backend
Round 10: Issue #2 done → Issue #3 ready → assigned to tester
Round 15: All done → Mission completed
Step 6: Handle Blocking
If CI fails or review is needed:
# Check why mission is blocked
tt mission status --watch
# Output shows:
# π§ Watch Items: 1
# β οΈ PR #42 CI check: failing (retrying in 180s)
The mission will:
- Auto-retry CI checks
- Create fix tasks if bugbot comments
- Wait for human review if reviewer_required
Step 7: Stop and Resume
# Pause the mission (can resume later)
tt mission stop abc123
# Resume when ready
tt mission resume abc123
Advanced: Custom Policy
# More parallelism
tt mission start -i 1 -i 2 -i 3 --max-parallel 4
# Skip reviewer (for drafts/experiments)
tt mission start -i 1 --no-reviewer
Advanced: Mission Manifest
For complex projects, create mission.toml:
# Override issue handling
[[overrides]]
issue = 1
owner_role = "architect"
priority = 10
[[overrides]]
issue = 2
depends_on = [1]
owner_role = "backend"
[[overrides]]
issue = 3
skip = true # Exclude from mission
Then reference it (feature coming soon).
Troubleshooting
| Problem | Solution |
|---|---|
| Mission stuck in Planning | Check if issues are accessible |
| Work item never ready | Verify dependency markers parsed |
| Agent not assigned | Spawn agent with matching role |
| CI watch failing | Check GitHub API permissions |
Best Practices
- Use clear dependency markers: depends on #N in the issue body
- Keep issues focused: one objective per issue
- Role-tag your agents: match agent roles to work types
- Monitor actively: use the --work flag to see progress
- Set appropriate parallelism: don't overwhelm your agents
Next Steps
tt bootstrap
Download and build Redis using an AI coding agent.
Synopsis
tt bootstrap [VERSION] [OPTIONS]
Description
Bootstraps Redis by delegating to an AI coding agent. The agent:
- Fetches the release info from https://github.com/redis/redis/releases
- Downloads the source tarball
- Builds Redis from source (make)
- Installs binaries to ~/.tt/bin/
This gets you the latest Redis compiled and optimized for your machine.
Arguments
| Argument | Description |
|---|---|
| [VERSION] | Redis version to install (default: latest) |
Options
| Option | Short | Description |
|---|---|---|
| --model <CLI> | -m | AI CLI to use (default: claude) |
Examples
Install Latest Redis
tt bootstrap
Install Specific Version
tt bootstrap 8.0.2
Use Different AI CLI
tt bootstrap --model auggie
tt bootstrap --model codex
Output
π Bootstrapping Redis latest to /Users/you/.tt
Using claude to download and build Redis...
π Running: claude --print --dangerously-skip-permissions < ~/.tt/bootstrap_prompt.md
(This may take a few minutes to download and compile)
[Agent output as it downloads and builds...]
✅ Redis installed successfully!
Add to your PATH:
export PATH="/Users/you/.tt/bin:$PATH"
Or add to ~/.zshrc or ~/.bashrc for persistence.
Then run: tt init
After Bootstrap
Add Redis to your PATH:
# Add to current session
export PATH="$HOME/.tt/bin:$PATH"
# Add to shell permanently
echo 'export PATH="$HOME/.tt/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
# Verify
redis-server --version
Then initialize a town:
tt init
Why Bootstrap?
| Method | Pros | Cons |
|---|---|---|
| tt bootstrap | Latest version, optimized for your CPU | Takes a few minutes to build |
| brew install redis | Quick, easy | May not have latest 8.0+ |
| apt install redis | System package | Often outdated version |
Alternative Installation Methods
If bootstrap fails or you prefer package managers:
macOS (Homebrew)
brew install redis
Ubuntu/Debian
sudo apt update
sudo apt install redis-server
From Source (Manual)
curl -LO https://github.com/redis/redis/archive/refs/tags/8.0.2.tar.gz
tar xzf 8.0.2.tar.gz
cd redis-8.0.2
make
sudo make install
See Also
- tt init – Initialize a town
- Installation Guide
tt init
Initialize a new town.
Synopsis
tt init [OPTIONS]
Description
Creates a new Tinytown workspace in the current directory. This:
- Creates the directory structure (agents/, logs/, tasks/)
- Generates the tinytown.toml configuration
- Starts a Redis server with a Unix socket
- Verifies Redis 8.0+ is installed
Options
| Option | Short | Description |
|---|---|---|
| --name <NAME> | -n | Town name (defaults to <repo>-<branch>) |
| --town <PATH> | -t | Town directory (defaults to .) |
| --verbose | -v | Enable verbose logging |
Default Name
If --name is not provided, the town name is automatically derived from:
- Git repo + branch: <repo-name>-<branch-name> (e.g., redisearch-feature-auth)
- Git repo only: if no branch is available
- Directory name: fallback if not in a git repo
This makes it easy to have unique town names per feature branch.
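That fallback chain can be sketched as a pure function. The inputs stand in for what tt init would read from git and the filesystem; the real logic may differ:

```rust
/// Derive a default town name: repo plus branch, then repo alone,
/// then the current directory name.
pub fn default_town_name(repo: Option<&str>, branch: Option<&str>, dir: &str) -> String {
    match (repo, branch) {
        (Some(r), Some(b)) => format!("{r}-{b}"), // e.g. my-project-feature-auth
        (Some(r), None) => r.to_string(),          // no branch available
        _ => dir.to_string(),                      // not in a git repo
    }
}
```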
Examples
Basic Initialization (Auto-Named)
cd ~/git/my-project
git checkout feature-auth
tt init
# Town name: my-project-feature-auth
With Custom Name
tt init --name "My Awesome Project"
Initialize in Different Directory
tt init --town ./projects/new-project --name new-project
Output
β¨ Initialized town 'my-project' at .
π‘ Redis running with Unix socket for fast message passing
π Run 'tt spawn <name>' to create agents
Files Created
my-project/
βββ tinytown.toml # Configuration
βββ agents/ # Agent working directories
βββ logs/ # Activity logs
βββ tasks/ # Task storage
Configuration
The generated tinytown.toml:
name = "my-project"
default_cli = "claude"
max_agents = 10
[redis]
use_socket = true
socket_path = "redis.sock"
[agent_clis.claude]
name = "claude"
command = "claude --print"
[agent_clis.auggie]
name = "auggie"
command = "augment"
[agent_clis.codex]
name = "codex"
command = "codex"
Errors
Redis Not Found
Error: Redis not found. Please install Redis 8.0+ and ensure 'redis-server' is on your PATH.
See: https://redis.io/downloads/
Solution: Install Redis 8.0+ and add to PATH.
Redis Version Too Old
Error: Redis version 7.4 is too old. Tinytown requires Redis 8.0 or later.
See: https://redis.io/downloads/
Solution: Upgrade to Redis 8.0+.
Directory Already Initialized
If tinytown.toml already exists, init will fail. Use tt start to connect to an existing town.
See Also
- tt start – Start an existing town
- tt spawn – Create agents
- Installation Guide
tt spawn
Create and start a new agent.
Synopsis
tt spawn <NAME> [OPTIONS]
Description
Spawns a new worker agent in the town. This actually starts an AI process!
The agent:
- Registers in Redis with state Starting
- Starts a background process (or foreground with --foreground)
- Runs in a loop, checking its inbox for tasks
- Executes the AI CLI (claude, auggie, etc.) for each task
- Stops after --max-rounds iterations
Arguments
| Argument | Description |
|---|---|
<NAME> | Human-readable agent name (e.g., worker-1, backend, reviewer) |
Options
| Option | Short | Description |
|---|---|---|
| --model <MODEL> | -m | AI CLI to use (default: from tinytown.toml) |
| --max-rounds <N> | | Maximum iterations before stopping (default: 10) |
| --foreground | | Run in foreground instead of background |
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |
Setting the Default CLI
Edit tinytown.toml to change which AI CLI is used by default:
name = "my-town"
default_cli = "auggie"
Then all tt spawn commands use that CLI:
tt spawn backend # Uses auggie (from config)
tt spawn frontend --model codex # Override to use codex
Built-in Agent CLIs
| CLI | Command (non-interactive) |
|---|---|
| claude | claude --print --dangerously-skip-permissions |
| auggie | auggie --print |
| codex | codex exec --dangerously-bypass-approvals-and-sandbox |
| aider | aider --yes --no-auto-commits --message |
| gemini | gemini |
| copilot | gh copilot |
| cursor | cursor |
These are the CLI tools that run AI coding agents, not the underlying models.
Examples
Spawn in Background (Default)
tt spawn worker-1
# Agent runs in background, logs to logs/worker-1.log
Spawn in Foreground (See Output)
tt spawn worker-1 --foreground
# Agent runs in this terminal, you see all output
Limit Iterations
tt spawn worker-1 --max-rounds 5
# Agent stops after 5 rounds (default is 10)
Spawn Multiple Agents (Parallel!)
tt spawn backend &
tt spawn frontend &
tt spawn tester &
# All three run in parallel
Output
π€ Spawned agent 'backend' using model 'auggie'
ID: 550e8400-e29b-41d4-a716-446655440000
π Starting agent loop in background (max 10 rounds)...
Logs: ./logs/backend.log
Agent running in background. Check status with 'tt status'
What Happens
- Agent registered in Redis (tt:<town>:agent:<id>)
- Background process started running tt agent-loop
- Agent loop:
  - Checks inbox for messages
  - If messages: builds a prompt, runs the AI model
  - Model output logged to logs/<name>_round_<n>.log
  - Repeats until --max-rounds reached
- Agent stops with state Stopped
Agent Naming
Choose descriptive names:
| Good Names | Why |
|---|---|
| backend | Describes the work area |
| worker-1 | Simple numbered workers |
| reviewer | Describes the role |
| alice | Personality names work too |
| Avoid | Why |
|---|---|
| agent | Too generic |
| a | Not descriptive |
| Spaces | Use hyphens instead |
Agent State After Spawn
New agents start in Starting state, then transition to Idle:
Starting → Idle (ready for work)
Check state with:
tt list
Errors
Town Not Initialized
Error: Town not initialized at . Run 'tt init' first.
Solution: Run tt init or specify --town path.
Agent Already Exists
Agents are tracked by name. Spawning the same name creates a new agent with a new ID.
See Also
- tt init – Initialize a town
- tt assign – Assign tasks to agents
- tt list – List all agents
- Agents Concept
tt restart
Restart a stopped agent with fresh rounds.
Synopsis
tt restart <AGENT> [OPTIONS]
Description
Restarts an agent that is in a terminal state (Stopped or Error). Resets the agent's state to Idle, clears any stop flags, and spawns a new agent process with fresh rounds.
The agent must already exist and be stopped. To create a new agent, use tt spawn.
Arguments
| Argument | Description |
|---|---|
| AGENT | Name of the agent to restart |
Options
| Option | Description |
|---|---|
| --rounds <N> | Maximum rounds for restarted agent (default: 10) |
| --foreground | Run in foreground instead of backgrounding |
| --town <PATH> | Town directory (default: .) |
| --verbose | Enable verbose logging |
Examples
Basic Restart
tt restart worker-1
Output:
π Restarting agent 'worker-1'...
Rounds: 10
Log: .tt/logs/worker-1.log
✅ Agent 'worker-1' restarted
Restart with More Rounds
tt restart worker-1 --rounds 20
Restart in Foreground
tt restart worker-1 --foreground
# Agent runs in terminal, you see all output
Error: Agent Still Active
tt restart worker-1
Output:
β Agent 'worker-1' is still active (Working)
Use 'tt kill worker-1' to stop it first
Error: Agent Not Found
tt restart nonexistent
Output:
β Agent 'nonexistent' not found
Restart vs Spawn
| Command | Use Case |
|---|---|
| tt restart | Revive existing stopped agent (keeps ID) |
| tt spawn | Create brand new agent (new ID) |
Common Workflow
After agent exhausts its rounds:
# Check status
tt status
# Shows: worker-1 (Stopped) - completed 10/10 rounds
# Restart with more rounds
tt restart worker-1 --rounds 15
See Also
- tt spawn – Create new agents
- tt kill – Stop agents gracefully
- tt recover – Mark orphaned agents as stopped
tt assign
Assign a task to an agent.
Synopsis
tt assign <AGENT> <TASK>
Description
Creates a new task record and sends it to the specified agent's inbox as a semantic task message.
tt assign sends a semantic task message and is the right command for actionable work. Use tt send for non-task communication such as queries, informational updates, or confirmations.
Arguments
| Argument | Description |
|---|---|
<AGENT> | Agent name to assign to |
<TASK> | Task description (quoted string) |
Options
| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |
Examples
Basic Assignment
tt assign worker-1 "Implement the login API"
Multi-line Task
tt assign backend "Create a REST API endpoint POST /users that:
- Accepts {email, password, name}
- Validates email format
- Hashes password with bcrypt
- Returns {id, email, name, created_at}"
Assign to Multiple Agents
tt assign frontend "Build the login form"
tt assign backend "Build the auth API"
tt assign tester "Write integration tests"
Output
π Assigned task 550e8400-e29b-41d4-a716-446655440000 to agent 'worker-1'
What Happens
- Task created with state Pending
- Task stored in Redis at tt:<town>:task:<id>
- Message sent to the agent's inbox at tt:<town>:inbox:<agent-id>
- Task state updated to Assigned
Task Lifecycle After Assignment
Pending → Assigned → Running → Completed
                        ├──→ Failed
                        └──→ Cancelled
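The lifecycle above can be expressed as a transition check. The enum mirrors the documented states, but the function itself is illustrative, not Tinytown's internal code:

```rust
/// The documented task states.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum TaskState { Pending, Assigned, Running, Completed, Failed, Cancelled }

/// True if `from → to` is one of the documented transitions.
pub fn can_transition(from: TaskState, to: TaskState) -> bool {
    use TaskState::*;
    matches!(
        (from, to),
        (Pending, Assigned)
            | (Assigned, Running)
            | (Running, Completed)
            | (Running, Failed)
            | (Running, Cancelled)
    )
}
```

Note that Completed, Failed, and Cancelled are terminal: no transition leads out of them.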
Viewing Assigned Tasks
Check what's in an agent's inbox:
# Using redis-cli (replace <town> with your town name)
redis-cli -s ./redis.sock LLEN tt:<town>:inbox:<agent-id>
redis-cli -s ./redis.sock LRANGE tt:<town>:inbox:<agent-id> 0 -1
Or check status:
tt status
Errors
Agent Not Found
Error: Agent not found: nonexistent
Solution: Spawn the agent first with tt spawn.
Town Not Initialized
Error: Town not initialized at . Run 'tt init' first.
Task Description Tips
Good task descriptions:
- Be specific about what to build
- Include acceptance criteria
- Mention relevant files/paths
- Specify output format if needed
# ✅ Good
tt assign backend "Create POST /api/users endpoint in src/routes/users.rs.
Accept JSON body {email, password}. Return 201 with {id, email}."
# ❌ Too vague
tt assign backend "Build API"
Use tt assign when the recipient should do concrete work, not just acknowledge or discuss.
See Also
- tt spawn – Create agents
- tt status – Check task status
- Tasks Concept
tt backlog
Manage the global backlog of unassigned tasks.
Synopsis
tt backlog <SUBCOMMAND> [OPTIONS]
Description
Use backlog when work should exist in Tinytown but should not be assigned immediately.
Backlog tasks are stored in Redis, can be tagged, and can be claimed later by the right agent.
Subcommands
Add
tt backlog add "<TASK DESCRIPTION>" [--tags tag1,tag2]
Creates a new task and places it in the global backlog queue.
List
tt backlog list
Shows all backlog task IDs with a short description and tags.
Claim
tt backlog claim <TASK_ID> <AGENT>
Removes a task from backlog, assigns it to <AGENT>, and sends a semantic TaskAssign message to that agent.
Assign All
tt backlog assign-all <AGENT>
Bulk-assigns every backlog task to one agent (useful for manual catch-up or handoff).
Remove
tt backlog remove <TASK_ID>
Removes a task from the backlog without assigning it to any agent. Useful for cleaning up tasks that are no longer needed or were added by mistake. The task record is deleted as part of the removal.
Options
| Option | Short | Description |
|---|---|---|
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
Park Work in Backlog
tt backlog add "Investigate flaky auth integration test" --tags test,auth,backend
tt backlog add "Document token refresh behavior" --tags docs,api
Review and Claim by Role
# Backend agent role
tt backlog list
tt backlog claim 550e8400-e29b-41d4-a716-446655440000 backend
# Docs agent role
tt backlog claim 550e8400-e29b-41d4-a716-446655440111 docs
Role-Based Claiming Pattern
When agents are idle, have them:
1. Run `tt backlog list`
2. Claim one task matching their role/tags
3. Work it to completion, then repeat
This keeps specialists busy without over-assigning work up front.
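A minimal in-memory sketch of this claiming loop (the real backlog lives in Redis and is claimed with `tt backlog claim`; the task shape here is an assumption):

```python
# Illustrative model of role-based claiming: pop the first backlog task
# whose tags match the agent's role. Not Tinytown's implementation.
backlog = [
    {"id": "t1", "desc": "Harden auth error handling", "tags": {"backend", "security"}},
    {"id": "t2", "desc": "Document token refresh", "tags": {"docs", "api"}},
]

def claim_matching(backlog, role):
    """Remove and return the first backlog task tagged with `role`, if any."""
    for i, task in enumerate(backlog):
        if role in task["tags"]:
            return backlog.pop(i)
    return None

task = claim_matching(backlog, "backend")
print(task["id"])    # t1
print(len(backlog))  # 1
```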
See Also
- tt assign β Directly assign new work
- tt conductor β Orchestrate agents interactively
- Tasks Concept
tt list
List all agents in the town.
Synopsis
tt list [OPTIONS]
Description
Shows all agents registered in the town with their current state.
Options
| Option | Short | Description |
|---|---|---|
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
List All Agents
tt list
Output
Agents:
backend (550e8400-e29b-41d4-a716-446655440000) - Working
frontend (6ba7b810-9dad-11d1-80b4-00c04fd430c8) - Idle
reviewer (6ba7b811-9dad-11d1-80b4-00c04fd430c9) - Idle
No Agents
No agents. Run 'tt spawn <name>' to create one.
Agent States
| State | Meaning |
|---|---|
| `Starting` | Agent is initializing |
| `Idle` | Ready for work |
| `Working` | Executing a task |
| `Paused` | Temporarily stopped |
| `Stopped` | Terminated |
| `Error` | Something went wrong |
See Also
- tt status β Detailed status including pending messages
- tt spawn β Create new agents
- Agents Concept
tt status
Show town status.
Synopsis
tt status [OPTIONS]
Description
Displays comprehensive status of the town including:
- Town name and location
- Redis connection info
- All agents with their states and pending messages
- Message type breakdown for pending inbox items (tasks, queries, informational, confirmations)
- With `--deep`: Recent activity from each agent
Options
| Option | Short | Description |
|---|---|---|
| `--deep` | | Show recent agent activity (stored in Redis) |
| `--tasks` | | Show detailed task breakdown by state and agent |
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
Basic Status
tt status
Output:
🏘️ Town: my-project
📁 Root: /Users/you/projects/my-project
📡 Redis: unix:///Users/you/projects/my-project/redis.sock
🤖 Agents: 3
backend (Working) - 0 messages pending
frontend (Idle) - 2 messages pending (tasks: 1, queries: 1, informational: 0, confirmations: 0)
reviewer (Idle) - 1 messages pending (tasks: 0, queries: 0, informational: 1, confirmations: 0)
Deep Status (with stats and activity)
tt status --deep
Output:
🏘️ Town: my-project
📁 Root: /Users/you/projects/my-project
📡 Redis: unix:///Users/you/projects/my-project/redis.sock
🤖 Agents: 3
backend (Working) - 0 pending, 12 rounds, uptime 1h 23m
└─ Round 12: ✅ completed
└─ Round 11: ✅ completed
frontend (Idle) - 2 pending (tasks: 1, queries: 1, informational: 0, confirmations: 0), 5 rounds, uptime 45m 12s
└─ Round 5: ✅ completed
reviewer (Idle) - 1 pending (tasks: 0, queries: 0, informational: 1, confirmations: 0), 2 rounds, uptime 30m 5s
└─ Round 2: ⚠️ model error
📊 Stats: rounds completed, uptime since spawn
Task Status (detailed task tracking)
tt status --tasks
Output:
🏘️ Town: my-project
📁 Root: /Users/you/projects/my-project
📡 Redis: unix:///Users/you/projects/my-project/redis.sock
🤖 Agents: 2
backend (Working) - 0 messages pending
reviewer (Idle) - 1 messages pending
📋 Tasks: 8 total (2 pending, 3 in-flight, 3 done)
📊 Task Breakdown by State:
⏳ Pending: 1
📨 Assigned: 1
🔄 Running: 2
✅ Completed: 3
❌ Failed: 0
🚫 Cancelled: 1
📥 Backlog: 2
📋 Tasks by Agent:
backend (2 active, 2 done):
🔄 abc123 Implement user authentication...
🔄 def456 Add rate limiting to API...
✅ ghi789 Setup database migrations...
✅ jkl012 Create user model...
reviewer (1 active, 1 done):
🔄 mno345 Review auth implementation...
✅ pqr678 Review database schema...
(unassigned) (2 tasks):
⏳ stu901 Write integration tests...
Stats Shown
| Stat | Description |
|---|---|
| Rounds | Number of agent loop iterations completed |
| Uptime | Time since agent was spawned |
| Pending | Messages waiting in inbox |
| Message Types | Pending breakdown: tasks, queries, informational, confirmations |
| Activity | Recent round results (last 5) |
| Task States | With --tasks: Pending, Assigned, Running, Completed, Failed, Cancelled, Backlog |
| Tasks by Agent | With --tasks: Tasks grouped by assigned agent with state icons |
Output Fields
| Field | Description |
|---|---|
| Town | Name from tinytown.toml |
| Root | Absolute path to town directory |
| Redis | Connection URL (socket or TCP) |
| Agents | Count and details |
Agent Details
For each agent:
- Name β Human-readable identifier
- State β Current lifecycle state
- Messages β Number of pending inbox messages
- Type Breakdown β Pending messages grouped as tasks, queries, informational, confirmations
Interpreting Status
| Situation | Meaning | Action |
|---|---|---|
| Agent Idle + 0 messages | Ready for work | Assign a task |
| Agent Idle + N messages | Messages waiting | Agent should process |
| Agent Working | Busy with task | Wait or check progress |
| Agent Error | Something failed | Check logs, respawn |
Related Commands
| Command | When to Use |
|---|---|
| `tt status` | Overview of everything |
| `tt list` | Just agent names and states |
Direct Redis Inspection
For more detail:
# Connect to Redis
redis-cli -s ./redis.sock
# List all keys for your town
KEYS tt:<town_name>:*
# Check specific inbox
LLEN tt:<town_name>:inbox:550e8400-e29b-41d4-a716-446655440000
# View agent state
GET tt:<town_name>:agent:550e8400-e29b-41d4-a716-446655440000
See Also
- tt list β Simple agent list
- tt start β Keep town running
- Towns Concept
tt tasks (Deprecated)
Note: The `tt tasks` command has been deprecated and replaced by `tt inbox --all`. Please use `tt inbox --all` instead.
Migration
The functionality of tt tasks is now available via:
tt inbox --all
See tt inbox for full documentation.
See Also
- tt inbox — Replacement for `tt tasks`
tt task
Manage individual tasks.
Synopsis
tt task <SUBCOMMAND> [OPTIONS]
Description
Provides operations for managing specific tasks by ID: completing, viewing details, or listing tasks.
Subcommands
Complete
Mark a task as completed:
tt task complete <TASK_ID> [--result <MESSAGE>]
Show
View details of a specific task:
tt task show <TASK_ID>
List
List all tasks with optional filtering:
tt task list [--state <STATE>]
States: pending, assigned, running, completed, failed, cancelled
Options
| Option | Short | Description |
|---|---|---|
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
Mark a Task Complete
tt task complete 550e8400-e29b-41d4-a716-446655440000 --result "Fixed the bug"
Output:
✅ Task 550e8400-e29b-41d4-a716-446655440000 marked as completed
Description: Fix authentication bug
Result: Fixed the bug
View Task Details
tt task show 550e8400-e29b-41d4-a716-446655440000
Output:
📋 Task: 550e8400-e29b-41d4-a716-446655440000
Description: Fix authentication bug
State: Running
Assigned to: backend-1
Created: 2025-03-09T12:00:00Z
Updated: 2025-03-09T12:05:00Z
Started: 2025-03-09T12:01:00Z
Tags: backend, auth
List Running Tasks
tt task list --state running
Output:
📋 Tasks (2):
🔄 550e8400-... - Fix authentication bug [backend-1]
🔄 660e9500-... - Update API endpoints [backend-2]
List All Tasks
tt task list
State icons:
- ⏳ Pending
- 📨 Assigned
- 🔄 Running
- ✅ Completed
- ❌ Failed
- 🚫 Cancelled
See Also
- tt inbox --all — Overview of pending messages
- tt assign β Assign new tasks
- tt backlog β Manage unassigned tasks
tt inbox
Check agent inbox(es).
Synopsis
tt inbox <AGENT> # Check specific agent
tt inbox --all # Show all agents' inboxes
Description
Shows pending messages in agent inboxes.
When used with --all, displays a summary of pending messages for all agents, categorized by type:
- [T] Tasks requiring action
- [Q] Queries awaiting response
- [I] Informational messages (FYI)
- [C] Confirmations/acknowledgments
Messages are added by:
- `tt assign` — Creates a task and sends a TaskAssign message
- `tt send` — Sends a custom message
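The [T]/[Q]/[I]/[C] summary can be modeled as a simple count over pending messages grouped by semantic type. A sketch, with an assumed message shape (not Tinytown's internal format):

```python
# Count pending messages per semantic type, like the tt inbox --all summary.
# The dict shape of a message here is an assumption for illustration.
from collections import Counter

inbox = [
    {"type": "task", "body": "Fix authentication bug"},
    {"type": "task", "body": "Update database schema"},
    {"type": "query", "body": "Which DB migration tool?"},
    {"type": "info", "body": "CI is green"},
]

counts = Counter(m["type"] for m in inbox)
# Counter returns 0 for absent keys, so [C] is 0 without special-casing.
print(counts["task"], counts["query"], counts["info"], counts["ack"])  # 2 1 1 0
```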
Arguments
| Argument | Description |
|---|---|
| `[AGENT]` | Agent name to check (optional with `--all`) |
Options
| Option | Short | Description |
|---|---|---|
| `--all` | `-a` | Show pending messages for all agents |
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
Check Specific Agent
tt inbox backend
Output:
📬 Inbox for 'backend': 3 messages
View All Agentsβ Inboxes
tt inbox --all
Output:
📋 Pending Messages by Agent:
backend (Working):
[T] 2 tasks requiring action
[Q] 1 queries awaiting response
[I] 3 informational
[C] 0 confirmations
• Fix authentication bug in login endpoint
• Update database schema for new fields
reviewer (Idle):
[T] 1 tasks requiring action
[Q] 0 queries awaiting response
[I] 0 informational
[C] 2 confirmations
• Review PR #42: Add user validation
Total: 4 actionable message(s)
Comparison with tt status
- `tt status` shows agent states and total inbox counts
- `tt inbox --all` shows message breakdown and previews
See Also
- tt send β Send messages to agents
- tt assign β Assign tasks (sends TaskAssign message)
- tt task β Manage individual tasks
- Messages Concept
tt send
Send a message to an agent.
Synopsis
tt send <TO> <MESSAGE> [OPTIONS]
Description
Sends a semantic message to an agentβs inbox. The agent will receive it on their next inbox check.
Message semantics:
- Default (no semantic flag): Task-style/actionable message
- `--query`: Question that expects a response or decision
- `--info`: Informational update (context only)
- `--ack`: Confirmation/receipt message
With --urgent: Message goes to priority inbox, processed before regular messages!
Use this for:
- Agent-to-agent communication
- Conductor instructions
- Custom coordination
- Urgent: Interrupt agents with priority messages
Arguments
| Argument | Description |
|---|---|
| `<TO>` | Target agent name |
| `<MESSAGE>` | Message content |
Options
| Option | Description |
|---|---|
| `--query` | Mark message as a query (query semantic type) |
| `--info` | Mark message as informational (info semantic type) |
| `--ack` | Mark message as confirmation (ack semantic type) |
| `--urgent` | Send as urgent (processed first at start of next round) |
Examples
Send a Regular Message
tt send backend "The API spec is ready in docs/api.md"
Output:
📤 Sent task message to 'backend'
Send a Query
tt send backend --query "Can you take auth token refresh next?"
Send Informational Context
tt send reviewer --info "CI is green on commit a1b2c3d"
Send a Confirmation
tt send conductor --ack "Received. I'll start after current task."
Send an URGENT Message
tt send backend --urgent "STOP! Security vulnerability found. Do not merge."
Output:
🚨 Sent URGENT task message to 'backend'
The agent will see this at the start of their next round, before processing regular inbox.
Coordination Between Agents
# Developer finishes, notifies reviewer
tt send reviewer "Implementation complete. Please review src/auth.rs"
# Critical bug found - urgent interrupt
tt send developer --urgent "Critical: SQL injection in login. Fix immediately."
How It Works
Regular Messages
- Go to `tt:<town>:inbox:<id>` (a Redis list)
- Processed in order with other messages
- Agent sees them when it checks its inbox
- Semantic type is attached as `task`, `query`, `info`, or `ack`
Urgent Messages
- Go to `tt:<town>:urgent:<id>` (a separate priority queue)
- Agent checks the urgent queue first at the start of each round
- Urgent messages are injected into the agent's prompt with a 🚨 marker
- Processed before the regular inbox
- Keep their semantic type (`task`, `query`, `info`, or `ack`)
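The two-queue ordering can be sketched like this (illustrative only; the real queues are Redis lists keyed per agent):

```python
# Model of urgent-before-regular delivery: drain the urgent queue fully
# before touching the regular inbox. Queue contents are illustrative.
from collections import deque

urgent = deque(["STOP! Security vulnerability found."])
regular = deque(["The API spec is ready", "CI is green"])

def next_message(urgent, regular):
    """Urgent messages are always delivered before regular ones."""
    if urgent:
        return urgent.popleft()
    if regular:
        return regular.popleft()
    return None

order = []
while (msg := next_message(urgent, regular)) is not None:
    order.append(msg)
print(order[0])  # STOP! Security vulnerability found.
```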
See Also
- tt inbox β Check agentβs inbox
- tt assign β Assign tasks (more structured)
- Coordination
tt kill
Stop an agent gracefully.
Synopsis
tt kill <AGENT>
Description
Requests an agent to stop gracefully. The agent will:
1. Finish its current model run (if any)
2. Check the stop flag at the start of the next round
3. Exit cleanly with state `Stopped`
This is a graceful stop, not an immediate kill. The agent completes its current work before stopping.
Arguments
| Argument | Description |
|---|---|
| `<AGENT>` | Agent name to stop |
Examples
Stop a Single Agent
tt kill backend
Output:
🛑 Requested stop for agent 'backend'
Agent will stop at the start of its next round.
Stop All Agents (Cleanup)
tt kill backend
tt kill frontend
tt kill reviewer
Check That Agent Stopped
tt status
Output:
🤖 Agents: 3
backend (Stopped) - 0 messages pending
frontend (Idle) - 0 messages pending
reviewer (Working) - 1 messages pending
How It Works
- Sets a stop flag in Redis: `tt:stop:<agent-id>`
- Agent checks this flag at the start of each round
- If the flag is set, the agent exits its loop gracefully
- The flag has a 1-hour TTL (auto-cleanup if the agent is already dead)
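A sketch of the graceful-stop loop, with a plain dict standing in for the Redis key `tt:stop:<agent-id>` (an illustrative model, not Tinytown's agent loop):

```python
# The agent checks the stop flag at the START of each round, so a flag set
# mid-round takes effect only after the current round finishes.
flags = {}  # stands in for Redis keys tt:stop:<agent-id>

def run_rounds(agent_id, max_rounds=10):
    rounds = 0
    for _ in range(max_rounds):
        if flags.get(agent_id):      # checked at the start of each round
            break
        rounds += 1                  # ... one round of work happens here ...
        if rounds == 3:
            flags[agent_id] = True   # stop requested mid-run (e.g. tt kill)
    return rounds

print(run_rounds("backend"))  # 3
```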
When to Use
- Work complete: All tasks finished, clean up agents
- Stuck agent: Agent not making progress, stop and respawn
- Resource cleanup: Free up system resources
- Reconfigure: Stop agent to change model or settings
See Also
- tt spawn β Start new agents
- tt status β Check agent states
- Coordination
tt prune
Remove stopped or stale agents from Redis.
Synopsis
tt prune [OPTIONS]
Description
Removes agents that are in a terminal state (Stopped or Error) from Redis. This cleans up agent records after theyβve finished or failed. Useful for managing long-running towns where many agents have come and gone.
Options
| Option | Short | Description |
|---|---|---|
| `--all` | | Remove ALL agents, not just stopped ones |
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
Prune Only Stopped Agents
tt prune
Output:
🗑️ Removed worker-1 (abc123...) - Stopped
🗑️ Removed worker-2 (def456...) - Error
✨ Pruned 2 agent(s)
Remove All Agents
tt prune --all
⚠️ Warning: This removes ALL agents including active ones.
Common Workflow
After recovering orphaned agents, prune them:
# First, recover any orphaned agents (marks them as stopped)
tt recover
# Then remove stopped agents from Redis
tt prune
See Also
- tt recover β Detect and clean up orphaned agents
- tt reset β Full town reset
- tt kill β Stop a specific agent
tt recover
Detect and clean up orphaned agents.
Synopsis
tt recover [OPTIONS]
Description
Scans for agents that appear to be running (Working/Starting state) but whose processes have actually crashed or been killed. Marks these orphaned agents as Stopped so they can be pruned or restarted.
An agent is considered orphaned if:
- Itβs in Working or Starting state
- Its log file hasnβt been modified in 2+ minutes, OR
- Its last heartbeat was 2+ minutes ago
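The orphan test can be sketched as a predicate (field names are assumptions; the real check also considers log-file modification time):

```python
# An agent is orphaned if it claims to be running but its heartbeat is
# stale. Timestamps are plain seconds; agent dicts are illustrative.
STALE_AFTER = 120  # 2 minutes, per the criteria above

def is_orphaned(agent, now):
    if agent["state"] not in ("Working", "Starting"):
        return False
    age = now - agent["last_heartbeat"]
    return age >= STALE_AFTER

now = 1_000_000
print(is_orphaned({"state": "Working", "last_heartbeat": now - 300}, now))  # True
print(is_orphaned({"state": "Idle", "last_heartbeat": now - 300}, now))     # False
```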
Options
| Option | Short | Description |
|---|---|---|
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
Examples
Scan for Orphaned Agents
tt recover
Output when orphans found:
🔍 Scanning for orphaned agents...
🔄 Recovered 'worker-1' (Working) - last heartbeat 5m ago
🔄 Recovered 'worker-2' (Working) - last heartbeat 3m ago
✨ Recovered 2 orphaned agent(s) (4 total checked)
Run 'tt prune' to remove them from Redis
Output when no orphans:
🔍 Scanning for orphaned agents...
✨ No orphaned agents found (3 agents checked)
Common Workflow
After a crash or system restart:
# 1. Recover orphaned agents (marks them stopped)
tt recover
# 2. Optional: Reclaim tasks from dead agents
tt reclaim --to-backlog
# 3. Clean up stopped agents
tt prune
# 4. Restart needed agents
tt restart worker-1
When to Use
- After system restart or crash
- When agents appear βstuckβ in Working state
- Before reclaiming tasks from dead agents
See Also
- tt prune β Remove stopped agents
- tt reclaim β Recover orphaned tasks
- tt restart β Restart stopped agents
- Error Handling & Recovery β Recovery tutorial
tt reclaim
Recover orphaned tasks from dead agents.
Synopsis
tt reclaim [OPTIONS]
Description
Finds tasks assigned to agents in terminal states (Stopped/Error) and moves them elsewhere. This prevents work from being lost when agents crash.
Without destination flags, lists orphaned tasks. With flags, moves them.
Options
| Option | Description |
|---|---|
| `--to-backlog` | Move orphaned tasks to the global backlog |
| `--to <AGENT>` | Move orphaned tasks to a specific agent |
| `--from <AGENT>` | Reclaim only from a specific dead agent |
| `--town <PATH>` | Town directory (default: `.`) |
| `--verbose` | Enable verbose logging |
Examples
Preview Orphaned Tasks
tt reclaim
Output:
🔄 Reclaiming orphaned tasks...
worker-1 (Stopped): 3 message(s)
task: 550e8400-e29b-41d4-a716-446655440000
task: Fix authentication bug in login endpoint
task: Update database schema
📋 Found 3 orphaned task(s)
Use --to-backlog or --to <agent> to reclaim them
Move Tasks to Backlog
tt reclaim --to-backlog
Output:
🔄 Reclaiming orphaned tasks...
worker-1 (Stopped): 3 message(s)
→ backlog: 550e8400-e29b-41d4-a716-446655440000
→ backlog: 660e9500-e29b-41d4-a716-446655440111
→ backlog: 770f0600-e29b-41d4-a716-446655440222
✅ Moved 3 task(s) to backlog
Reassign to Another Agent
tt reclaim --to worker-2
Output:
🔄 Reclaiming orphaned tasks...
worker-1 (Stopped): 3 message(s)
→ worker-2: 550e8400-e29b-41d4-a716-446655440000
→ worker-2: Fix authentication bug in login endpoint
→ worker-2: Update database schema
✅ Moved 3 task(s) to 'worker-2'
Reclaim from Specific Agent
tt reclaim --from worker-1 --to-backlog
Common Workflow
After a crash:
# 1. Recover orphaned agents first
tt recover
# 2. Reclaim their tasks to backlog
tt reclaim --to-backlog
# 3. Clean up stopped agents
tt prune
# 4. Restart or spawn new agents
tt restart worker-1
# or
tt spawn worker-3
# 5. Let agents claim from backlog
See Also
- tt recover β Mark orphaned agents as stopped
- tt prune β Remove stopped agents
- tt backlog β Manage the backlog
- Error Handling & Recovery β Recovery tutorial
tt reset
Reset all town state, clearing agents, tasks, and messages from Redis.
Synopsis
tt reset [OPTIONS]
Description
Performs a complete reset of the townβs Redis state. This is useful when you want to start fresh without reinitializing the entire town, or when cleaning up after a failed run.
⚠️ Warning: This operation cannot be undone. All agents, tasks, and messages will be permanently deleted.
Options
| Option | Description |
|---|---|
| `--force` | Skip the confirmation prompt and proceed immediately |
| `--agents-only` | Only reset agent-related state (agents and inboxes), preserving tasks and backlog |
| `--town <PATH>` | Town directory (default: `.`) |
| `--verbose` | Enable verbose logging |
Examples
Full Reset (with confirmation)
tt reset
Output:
🗑️ Resetting town 'my-project'
This will delete:
- 3 agent(s)
- 12 task(s)
- 2 backlog item(s)
⚠️ This action cannot be undone!
Run with --force to confirm: tt reset --force
Full Reset (immediate)
tt reset --force
Output:
🗑️ Resetting town 'my-project'
This will delete:
- 3 agent(s)
- 12 task(s)
- 2 backlog item(s)
✅ Reset complete: deleted 47 Redis keys
Run 'tt spawn <name>' to create new agents
Agents-Only Reset
Reset just the agents while preserving tasks and backlog:
tt reset --agents-only --force
Output:
🗑️ Resetting agents in town 'my-project'
This will delete:
- 3 agent(s) and their inboxes
Tasks and backlog will be preserved.
✅ Reset complete: deleted 12 Redis keys (agents only)
Run 'tt spawn <name>' to create new agents
Use Cases
| Scenario | Command |
|---|---|
| Start completely fresh | tt reset --force |
| Replace agents but keep tasks | tt reset --agents-only --force |
| Preview what will be deleted | `tt reset` (no `--force`) |
What Gets Deleted
Full Reset (tt reset --force)
- All registered agents (`tt:<town>:agent:*`)
- All agent inboxes (`tt:<town>:inbox:*`)
- All tasks (`tt:<town>:task:*`)
- All backlog items (`tt:<town>:backlog`)
- Agent activity logs
Agents-Only Reset (tt reset --agents-only --force)
- All registered agents (`tt:<town>:agent:*`)
- All agent inboxes (`tt:<town>:inbox:*`)
- Agent activity logs
Tasks and backlog are preserved.
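The difference between the two modes is which key patterns get deleted. A sketch using `fnmatch` in place of Redis `SCAN MATCH` (key names follow the documented layout; the code is illustrative):

```python
# Select Redis keys to delete for full vs agents-only reset.
from fnmatch import fnmatch

keys = [
    "tt:my-project:agent:abc", "tt:my-project:inbox:abc",
    "tt:my-project:task:t1", "tt:my-project:backlog",
]

AGENT_PATTERNS = ["tt:my-project:agent:*", "tt:my-project:inbox:*"]
FULL_PATTERNS = AGENT_PATTERNS + ["tt:my-project:task:*", "tt:my-project:backlog"]

def select(keys, patterns):
    """Return keys matching any of the glob patterns."""
    return [k for k in keys if any(fnmatch(k, p) for p in patterns)]

print(len(select(keys, AGENT_PATTERNS)))  # 2  (agents-only reset)
print(len(select(keys, FULL_PATTERNS)))   # 4  (full reset)
```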
Recovery
If you accidentally reset:
- If you previously ran `tt save`, you may be able to `tt restore` from the AOF file
- If tasks were synced to `tasks.toml`, you can `tt sync push` to recreate them
See Also
- tt init β Initialize a new town
- tt spawn β Create agents after reset
- tt save β Save state before reset
- tt restore β Restore from saved state
tt conductor
Start the conductor - an AI agent that orchestrates your town.
Synopsis
tt conductor [OPTIONS]
Description
The conductor is an AI agent (using your default model) that coordinates your Tinytown! 🚂
Like the train conductor guiding the miniature train through Tiny Town, Colorado, it:
- Understands what you want to build
- Breaks down work into tasks
- Spawns appropriate agents
- Assigns tasks to agents
- Keeps unassigned work in backlog
- Monitors progress
- Helps resolve blockers
The conductor knows how to use the tt CLI to orchestrate your project.
Options
| Option | Short | Description |
|---|---|---|
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
| `--verbose` | `-v` | Enable verbose logging |
How It Works
1. Context Injection: The conductor receives context about:
   - Current town state (agents, tasks, pending messages)
   - Whether this is a fresh start or resuming an existing session
   - Available `tt` commands and suggested team roles
   - Its role as orchestrator
2. AI Model Launch: Your default model (claude, auggie, etc.) starts with this context
3. Natural Conversation: You describe what you want, the AI orchestrates
Fresh Start vs Resuming
The conductor behaves differently based on existing state:
Fresh Start (No Agents)
When starting with a new town, the conductor:
- Asks what youβre trying to build
- Offers to analyze the project (README, design docs, codebase)
- Suggests appropriate team roles for your project
- Helps break down your idea into tasks and agent assignments
Resuming (Existing Agents)
When agents already exist, the conductor:
- Shows current agent status
- Checks progress with
tt status --deep - Continues coordinating from where you left off
Suggested Team Roles
The conductor knows about common team roles and when to suggest them:
| Role | When to Suggest |
|---|---|
| `backend` | API development, server-side logic |
| `frontend` | UI/UX implementation |
| `tester` | Writing and running tests |
| `reviewer` | Always include - quality gate |
| `devops` | CI/CD, deployment, infrastructure |
| `security` | Security review, vulnerability analysis |
| `docs` | Documentation, API specs, README |
| `architect` | System design, code structure |
The conductor will analyze your project and suggest roles that make sense.
Example Session
$ tt conductor
🚂 Starting conductor with claude model...
Context: ./.conductor_context.md
Running: claude --print
# Tinytown Conductor
You are the **conductor** of Tinytown "my-project"...
[context displayed]
---
User: Build a user authentication system with login, signup, and password reset.
Conductor: I'll set up a team for this. Let me spawn some agents and create a plan.
[Conductor runs: tt spawn architect]
[Conductor runs: tt spawn backend]
[Conductor runs: tt spawn tester]
I've created three agents. Now let me assign the initial work:
[Conductor runs: tt assign architect "Design REST API for user authentication..."]
The architect is working on the API design. Once complete, I'll assign implementation to the backend agent and tests to the tester.
[Conductor runs: tt status]
Current status:
- architect (Working) - designing the API
- backend (Idle) - waiting for design
- tester (Idle) - will write tests after implementation
The Reviewer Pattern
The conductor always spawns a reviewer agent. This creates a simple completion protocol:
Worker completes task
↓
Conductor assigns review task to reviewer
↓
Reviewer checks work → approves or requests changes
↓
Conductor marks complete or assigns fixes
This keeps it simple:
- Workers do the work
- Reviewer decides if itβs done
- Conductor coordinates everything
Backlog Pattern
Use backlog for work that should exist but should not be assigned yet:
tt backlog add "Task needing ownership decision" --tags backend,auth
tt backlog list
tt backlog claim <task_id> <agent>
A practical approach:
- Conductor adds uncertain work to backlog
- Idle agents review backlog
- Agents claim role-matching tasks
The Conductorβs Context
The conductor receives a markdown context file that includes:
# Tinytown Conductor
You are the **conductor** of Tinytown "my-project"...
## Current Town State
- Agents: backend (Working), reviewer (Idle)
- Tasks pending: 1
## Your Capabilities
- tt spawn <name> - Create agents
- tt assign <agent> "task" - Assign work
- tt backlog list - Review unassigned tasks
- tt backlog claim <task_id> <agent> - Claim backlog task
- tt task complete <task_id> --result "summary" - Mark task done
- tt status - Check progress
## The Reviewer Pattern
Always spawn a reviewer. They decide when work is done.
## Your Role
1. Break down user requests into tasks
2. Spawn workers + reviewer
3. Assign work, then assign review
4. Coordinate until reviewer approves
5. Save state with `tt sync pull`, suggest git commit
Comparison with gt mayor attach
| Gastown | Tinytown |
|---|---|
| `gt mayor attach` | `tt conductor` |
| Natural language | Natural language ✅ |
| Mayor is complex orchestrator | Conductor is simple AI + CLI |
| Hard to understand what Mayor does | You can read the context |
| Recovery daemons, convoys, beads | Just `tt` commands |
The conductor is transparent: you can see exactly what context it has and what commands it runs.
See Also
tt plan
Plan tasks in a file before starting work.
Synopsis
tt plan [OPTIONS]
tt plan --init
Description
The plan command lets you define tasks in tasks.toml before syncing them to Redis. This enables:
- Version control β Check in your task plan with your code
- Offline planning β Edit tasks without Redis running
- Review before execution β Plan the work, then start the train
Options
| Option | Short | Description |
|---|---|---|
| `--init` | `-i` | Create a new `tasks.toml` file |
| `--town <PATH>` | `-t` | Town directory (default: `.`) |
Examples
Initialize a Task Plan
tt plan --init
Creates tasks.toml:
[meta]
description = "Task plan for this project"
[[tasks]]
id = "example-1"
description = "Example task - replace with your own"
status = "pending"
tags = ["example"]
View Current Plan
tt plan
Output:
📋 Tasks in plan (./tasks.toml):
⏳ [unassigned] example-1 - Example task - replace with your own
Edit and Sync
# Edit tasks.toml with your editor
vim tasks.toml
# Push to Redis
tt sync push
Task File Format
[meta]
description = "Sprint 1 tasks"
default_agent = "developer" # Optional default
[[tasks]]
id = "auth-api"
description = "Build user authentication API"
agent = "backend"
status = "pending"
tags = ["auth", "api"]
[[tasks]]
id = "auth-tests"
description = "Write auth API tests"
agent = "tester"
status = "pending"
parent = "auth-api" # Optional parent task
[[tasks]]
id = "auth-review"
description = "Review auth implementation"
status = "pending"
# No agent = unassigned
Task Status Values
| Status | Icon | Meaning |
|---|---|---|
| `pending` | ⏳ | Not started |
| `assigned` | 📨 | Given to an agent |
| `running` | 🔄 | In progress |
| `completed` | ✅ | Done |
| `failed` | ❌ | Error |
Workflow
1. Plan: `tt plan --init` → edit `tasks.toml`
2. Review: `tt plan` to see the plan
3. Start: `tt sync push` to send to Redis
4. Execute: Agents receive tasks
5. Snapshot: `tt sync pull` to save state
Why Plan in a File?
- Git history β Track how the plan evolved
- Code review β Review task definitions in PRs
- Templates β Reuse task structures across projects
- Offline β Plan without starting Redis
Redis remains the source of truth at runtime; the file is for planning and version control.
See Also
- tt sync β Sync tasks.toml β Redis
- tt conductor β Interactive mode
- Tasks Concept
tt sync
Synchronize tasks between tasks.toml and Redis.
Synopsis
tt sync [push|pull]
Description
The sync command moves task data between the tasks.toml file and Redis:
- `push`: File → Redis (deploy your plan)
- `pull`: Redis → File (snapshot current state)
Arguments
| Argument | Description |
|---|---|
| `push` | Send tasks from `tasks.toml` to Redis (default) |
| `pull` | Save Redis tasks to `tasks.toml` |
Examples
Push Plan to Redis
After editing tasks.toml:
tt sync push
Output:
⬆️ Pushed 5 tasks from tasks.toml to Redis
Pull State from Redis
Save current Redis state to file:
tt sync pull
Output:
⬇️ Pulled 5 tasks from Redis to tasks.toml
Workflow
┌─────────────────┐    push    ┌─────────────────┐
│   tasks.toml    │ ─────────► │      Redis      │
│   (planning)    │            │   (execution)   │
│                 │ ◄───────── │                 │
└─────────────────┘    pull    └─────────────────┘
        │                             │
        ▼                             ▼
   Git tracked                  Fast queries
   Human readable               Agent access
   Offline edits                Real-time state
When to Use
Push (file → Redis)
- After editing `tasks.toml`
- At the start of a work session
- To reset task state
Pull (Redis → file)
- Before committing to git
- To snapshot progress
- To share state with team
Data Flow
Push Behavior
1. Reads all tasks from `tasks.toml`
2. Creates corresponding Task objects
3. Stores each in Redis at `tt:<town>:task:<id>`
4. Tags include `plan:<id>` for tracking
Pull Behavior
- (Currently) Initializes an empty `tasks.toml` if missing
- (Future) Scans Redis for `tt:<town>:task:*` keys, converts to TaskEntry format, and writes to `tasks.toml`
See Also
- tt plan β Create and view task plans
- Tasks Concept
- Redis Configuration
tt save
Save Redis state to AOF file for version control.
Synopsis
tt save
Description
Triggers Redis to compact and save its state to an AOF (Append Only File). This file can then be version controlled with git.
How It Works
1. Sends the `BGREWRITEAOF` command to Redis
2. Redis compacts all operations into a single AOF file
3. The file is saved to `.tt/redis.aof` (configurable in `tinytown.toml`)
Example
tt save
Output:
💾 Saving Redis state...
AOF rewrite triggered. File: ./.tt/redis.aof
To version control Redis state:
git add .tt/redis.aof
git commit -m 'Save town state'
Version Control Workflow
# Work on your project
tt spawn backend
tt assign backend "Build the API"
# ... agents work ...
# Save state before committing code
tt save
git add .tt/redis.aof tasks.toml
git commit -m "API implementation complete"
# Later, restore on another machine
git pull
tt restore # See instructions
AOF File Contents
The AOF file contains Redis commands to recreate state:
- All agent registrations
- All task states
- All messages in inboxes
- All activity logs
Config Options
In tinytown.toml:
[redis]
persist = true
aof_path = ".tt/redis.aof"
See Also
- tt restore β Restore state from AOF
- tt sync β Sync tasks.toml with Redis
- Redis Configuration
tt restore
Restore Redis state from AOF file.
Synopsis
tt restore
Description
Shows how to restore Redis state from a saved AOF file. This is useful when:
- Starting on a new machine
- Recovering from a crash
- Continuing work from a git checkout
Example
tt restore
Output:
📁 AOF file found: ./redis.aof
To restore from AOF:
1. Stop Redis if running
2. Start Redis with: redis-server --appendonly yes --appendfilename redis.aof
3. Redis will replay the AOF and restore state
Or just run 'tt init' - it will use existing AOF if present.
Restore Workflow
Option 1: Manual Restore
# Stop any running Redis
pkill redis-server
# Start Redis with AOF enabled
redis-server --appendonly yes --dir . --appendfilename redis.aof --port 0 --unixsocket redis.sock &
# Redis replays AOF and restores state
tt status
Option 2: Fresh Init (Recommended)
If redis.aof exists in the town directory, tt init will automatically
configure Redis to use it:
cd my-project
tt init # Detects existing redis.aof
tt status # State is restored!
What Gets Restored
- ✅ Agent registrations (names, states, models)
- ✅ Task states (pending, completed, etc.)
- ✅ Message queues (inbox contents)
- ✅ Activity logs (recent history)
- ✅ Stop flags, urgent queues, etc.
See Also
- tt save — Save state to AOF
- Redis Configuration
tt start
Keep a connection open to the current town.
Synopsis
tt start [OPTIONS]
Description
Connects to an existing town and keeps the process running until Ctrl+C. This is useful for:
- Keeping an active town session open during development
- Maintaining a persistent connection for debugging
- Watching a town without spawning a new agent
Note: Towns automatically connect to central Redis when you run tt init or any command that needs it. This command is mainly for explicitly keeping the town connection open.
Options
| Option | Short | Description |
|---|---|---|
--town <PATH> | -t | Town directory (default: .) |
--verbose | -v | Enable verbose logging |
Examples
Keep Town Running
tt start
Output:
Town connection open
^C
Closing town connection...
With Specific Town
tt start --town ~/git/my-project
When to Use
Most operations don't require tt start because:
- tt init provisions the town and connects as needed
- tt spawn connects and stays alive
- tt status connects temporarily
Use tt start when you want to:
- Keep a town session open without spawning agents
- Debug connection issues
- Manually control the town lifecycle
tt stop
Request all agents in the current town to stop gracefully.
Synopsis
tt stop [OPTIONS]
Description
tt stop requests all agents in the current town to stop gracefully.
It does not shut down the shared central Redis instance, because that instance may be serving other towns.
Note: To fully reset a town, use tt reset instead.
Options
| Option | Short | Description |
|---|---|---|
--town <PATH> | -t | Town directory (default: .) |
--verbose | -v | Enable verbose logging |
Examples
Stop the Town
tt stop
Output:
Requested graceful stop for 3 agent(s) in town 'my-project'
Agents will stop at the start of their next round.
Central Redis remains available to other towns.
Related Operations
| Task | Command |
|---|---|
| Stop one agent | tt kill <agent> |
| Reset all state | tt reset |
| Inspect remaining work | tt inbox --all |
Shared Redis Lifecycle
Central Redis in Tinytown is shared across towns:
- Starts when a town first needs it
- Remains available after tt stop
- Should only be shut down through explicit Redis administration, not normal town cleanup
For persistent deployments, consider running Redis independently.
tt config
View or set global configuration.
Synopsis
tt config [KEY] [VALUE]
Description
Manages the global Tinytown configuration stored in ~/.tt/config.toml. This configuration applies to all towns unless overridden.
Arguments
| Argument | Description |
|---|---|
KEY | Config key to get or set (e.g., default_cli) |
VALUE | Value to set (if omitted, shows current value) |
Options
| Option | Short | Description |
|---|---|---|
--verbose | -v | Enable verbose logging |
Available Keys
| Key | Description | Values |
|---|---|---|
default_cli | Default AI CLI for agents | claude, auggie, codex, aider, gemini, copilot, cursor |
agent_clis.<name> | Custom CLI command for a named CLI | Any command string |
Examples
View All Configuration
tt config
Output:
⚙️ Global config: /Users/me/.tt/config.toml
default_cli = "claude"
[agent_clis]
my-custom = "custom-ai --mode agent"
Available CLIs: claude, auggie, codex, aider, gemini, copilot, cursor
Get a Specific Value
tt config default_cli
Output:
claude
Set Default CLI
tt config default_cli auggie
Output:
✅ Set default_cli = "auggie"
Saved to: /Users/me/.tt/config.toml
Add Custom CLI
tt config agent_clis.my-ai "my-ai-cli --flag"
Configuration Precedence
1. CLI argument (--cli)
2. Town config (tinytown.toml)
3. Global config (~/.tt/config.toml)
4. Built-in default (claude)
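As an illustration, the precedence chain amounts to taking the first value that is actually set. A minimal Python sketch (the function name is hypothetical, not part of the Tinytown API):

```python
def resolve_cli(cli_arg=None, town_config=None, global_config=None):
    """Return the first configured CLI, falling back to the built-in default."""
    for value in (cli_arg, town_config, global_config):
        if value is not None:
            return value
    return "claude"  # built-in default

# --cli beats town config; with nothing set, the default applies
print(resolve_cli(cli_arg="codex", town_config="auggie"))  # codex
print(resolve_cli())                                       # claude
```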
Config File Format
~/.tt/config.toml:
default_cli = "claude"
[agent_clis]
my-custom = "custom-ai --mode agent"
See Also
- Custom Models — Adding custom AI CLIs
- tt init — Town-level configuration
tt towns
List all registered towns.
Synopsis
tt towns [OPTIONS]
Description
Displays all towns registered in ~/.tt/towns.toml. Towns are automatically registered when initialized with tt init. For each town, shows:
- Town name
- Connection status
- Agent count (if online)
- Path on disk
Options
| Option | Short | Description |
|---|---|---|
--verbose | -v | Enable verbose logging |
Examples
List Registered Towns
tt towns
Output:
🏘️ Registered Towns (3):
my-project - [OK] 3 agents (2 active)
  /Users/me/git/my-project
feature-branch - [OK] 1 agent (1 active)
  /Users/me/git/my-project
old-project - [OFFLINE]
  /Users/me/git/old-project
Status Indicators
| Status | Meaning |
|---|---|
[OK] N agents (M active) | Town online with Redis connection |
[OFFLINE] | Town exists but Redis not running |
⚠️ (no config) | Path exists but no tinytown.toml |
❌ (path not found) | Directory no longer exists |
No Towns Registered
tt towns
Output:
No towns registered yet.
Run 'tt init' in a directory to register a town.
Registration
Towns are automatically registered when created:
cd ~/git/my-new-project
tt init --name my-new-project
# Town is now registered in ~/.tt/towns.toml
Registry File
Towns are tracked in ~/.tt/towns.toml:
[[towns]]
name = "my-project"
path = "/Users/me/git/my-project"
[[towns]]
name = "feature-branch"
path = "/Users/me/git/my-project"
tt auth
Authentication management for townhall.
Synopsis
tt auth <SUBCOMMAND>
Description
Manages authentication credentials for the townhall REST API and MCP servers.
Subcommands
gen-key
Generate a new API key and its hash:
tt auth gen-key
Examples
Generate API Key
tt auth gen-key
Output:
Generated new API key
API Key (store securely, shown only once):
tt_abc123def456...
API Key Hash (add to tinytown.toml):
$argon2id$v=19$m=19456,t=2,p=1$...
Add to your tinytown.toml:
[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$..."
Then use the API key with townhall:
curl -H 'Authorization: Bearer tt_abc12...' http://localhost:8080/v1/status
Configuration
After generating a key, add to tinytown.toml:
[townhall]
bind = "127.0.0.1"
rest_port = 8787
[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$m=19456,t=2,p=1$..."
Using the API Key
With curl
curl -H "Authorization: Bearer tt_abc123..." http://localhost:8787/v1/status
In scripts
export TINYTOWN_API_KEY="tt_abc123..."
curl -H "Authorization: Bearer $TINYTOWN_API_KEY" http://localhost:8787/v1/agents
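The same authenticated call can be made from Python's standard library. A sketch that only builds the request (urllib.request.urlopen(req) would send it once townhall is actually running; the key and port follow the examples above):

```python
import os
import urllib.request

# Assumes TINYTOWN_API_KEY is exported as in the shell example above
api_key = os.environ.get("TINYTOWN_API_KEY", "tt_abc123...")

req = urllib.request.Request(
    "http://localhost:8787/v1/agents",
    headers={"Authorization": f"Bearer {api_key}"},
)
print(req.get_header("Authorization"))
# urllib.request.urlopen(req) performs the call against a running townhall
```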
Security Best Practices
- Never commit API keys — Add to .env or a secrets manager
- Use environment variables — Don't hardcode keys in scripts
- Rotate keys periodically — Generate new keys with tt auth gen-key
- Consider OIDC — For production, use OIDC authentication
See Also
- Authentication & Authorization — Full auth guide
- Townhall Control Plane — REST API reference
tt migrate
Migrate Redis keys from old format to town-isolated format.
Synopsis
tt migrate [OPTIONS]
Description
The migrate command handles backward compatibility when upgrading to a version of Tinytown that uses town-isolated Redis keys. Older versions stored keys in the format tt:type:id, while newer versions use tt:<town_name>:type:id to support multiple towns sharing the same Redis instance.
This migration is:
- Safe: Preview with --dry-run before committing
- Idempotent: Running multiple times has no effect after the initial migration
- Atomic: Each key is renamed atomically
Options
| Option | Description |
|---|---|
--dry-run | Preview migration without making changes |
--force | Skip confirmation prompt |
Key Formats
Old Format (pre-isolation)
tt:agent:<uuid>
tt:inbox:<uuid>
tt:urgent:<uuid>
tt:task:<uuid>
tt:activity:<uuid>
tt:stop:<uuid>
tt:backlog
New Format (town-isolated)
tt:<town_name>:agent:<uuid>
tt:<town_name>:inbox:<uuid>
tt:<town_name>:urgent:<uuid>
tt:<town_name>:task:<uuid>
tt:<town_name>:activity:<uuid>
tt:<town_name>:stop:<uuid>
tt:<town_name>:backlog
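The rename itself is a pure prefix rewrite. Something like the following (illustrative Python, not the actual implementation):

```python
def migrated_key(key: str, town: str) -> str:
    """Rewrite a pre-isolation key (tt:type:id) to its town-isolated form."""
    prefix, rest = key.split(":", 1)
    if prefix != "tt":
        raise ValueError(f"not a Tinytown key: {key}")
    return f"tt:{town}:{rest}"

print(migrated_key("tt:agent:abc123", "my-project"))  # tt:my-project:agent:abc123
print(migrated_key("tt:backlog", "my-project"))       # tt:my-project:backlog
```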
Examples
Preview Migration (Dry Run)
tt migrate --dry-run
Output:
Migration Preview (dry run)
Town: my-project
Keys to migrate:
tt:agent:abc123 → tt:my-project:agent:abc123
tt:inbox:abc123 → tt:my-project:inbox:abc123
tt:task:def456 → tt:my-project:task:def456
Total: 3 key(s) would be migrated
Run 'tt migrate' (without --dry-run) to perform migration.
Perform Migration (Interactive)
tt migrate
Prompts for confirmation before proceeding.
Perform Migration (Force)
tt migrate --force
Skips the confirmation prompt. Useful for automation/CI.
When to Use
You need to run migration if:
- Upgrading from an older version: If you're upgrading from a version that didn't have town isolation, your existing keys need migration.
- Sharing Redis between towns: Town isolation allows multiple Tinytown projects to use the same Redis instance without key conflicts.
If the command reports "no migration needed", your keys are already in the new format.
Recovery
If migration fails partway through:
- Check which keys failed in the output
- Investigate the specific errors
- Run
tt migrate again (idempotent: already-migrated keys won't be re-migrated)
See Also
- tt init - Initialize a new town
- tt reset - Reset all town state
- Towns concept - Understanding towns
tt mission
Autonomous multi-issue mission mode commands.
Synopsis
tt mission start [OPTIONS]
tt mission status [OPTIONS]
tt mission resume <RUN_ID>
tt mission stop <RUN_ID> [OPTIONS]
tt mission list [OPTIONS]
Description
Mission mode enables durable, dependency-aware orchestration of multiple GitHub issues with automatic PR/CI monitoring. Use these commands to start, monitor, and control missions.
Subcommands
tt mission start
Start a new mission with one or more objectives.
tt mission start --issue <ISSUE>... [--doc <PATH>...] [OPTIONS]
Options:
| Option | Short | Description |
|---|---|---|
--issue <ISSUE> | -i | GitHub issue number or URL (repeatable) |
--doc <PATH> | -d | Document path as objective (repeatable) |
--max-parallel <N> | | Max parallel work items (default: 2) |
--no-reviewer | | Disable reviewer requirement |
Issue Formats:
- 23 → Issue #23 in current repo
- owner/repo#23 → Fully qualified issue
- https://github.com/owner/repo/issues/23 → Full URL
Examples:
# Start with single issue
tt mission start --issue 23
# Multiple issues
tt mission start -i 23 -i 24 -i 25
# Cross-repo issues
tt mission start --issue my-org/other-repo#42
# Include a design doc
tt mission start --issue 23 --doc docs/design.md
# Allow more parallelism
tt mission start -i 23 -i 24 --max-parallel 4
# Skip reviewer gate
tt mission start -i 23 --no-reviewer
tt mission status
Show status of missions.
tt mission status [--run <ID>] [--work] [--watch]
Options:
| Option | Short | Description |
|---|---|---|
--run <ID> | -r | Show specific mission by ID |
--work | | Show detailed work item status |
--watch | | Show watch items (PR/CI monitors) |
Examples:
# Show all active missions
tt mission status
# Specific mission with work items
tt mission status --run abc123 --work
# Include watch items
tt mission status -r abc123 --work --watch
Output:
🎯 Mission Status
ID: abc123-def456-...
State: Running
Created: 2024-01-15 10:30:00 UTC
Updated: 2024-01-15 11:45:00 UTC
Objectives: 3
- redis-field-engineering/tinytown#23
- redis-field-engineering/tinytown#24
- redis-field-engineering/tinytown#25
⚙️ Policy:
Max parallel: 2
Reviewer required: true
Auto-merge: false
Watch interval: 180s
📦 Work Items: 5
🔵 ready    Issue #23: Implement auth flow
running  Issue #24: Add rate limiting (→ backend)
⏳ pending  Issue #25: Write tests
tt mission resume
Resume a stopped or blocked mission.
tt mission resume <RUN_ID>
Examples:
tt mission resume abc123-def456-...
tt mission stop
Stop an active mission.
tt mission stop <RUN_ID> [--force]
Options:
| Option | Description |
|---|---|
--force | Force stop without graceful cleanup |
Examples:
# Graceful stop (can be resumed)
tt mission stop abc123
# Force stop (cannot be resumed)
tt mission stop abc123 --force
tt mission list
List all missions.
tt mission list [--all]
Options:
| Option | Description |
|---|---|
--all | Include completed/failed missions |
Examples:
# Active missions only
tt mission list
# All missions including completed
tt mission list --all
Work Item States
| State | Emoji | Description |
|---|---|---|
| Pending | ⏳ | Waiting for dependencies |
| Ready | 🔵 | Dependencies satisfied, can be assigned |
| Assigned | | Assigned to an agent |
| Running | | Agent is actively working |
| Blocked | 🚧 | Waiting on external event |
| Done | ✅ | Completed successfully |
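One way to read the table is as a small state machine. The sketch below encodes the transitions the descriptions imply; this is an inference from the table, not the mission engine's actual rules:

```python
# Hypothetical work-item lifecycle implied by the state table
TRANSITIONS = {
    "pending":  {"ready"},            # dependencies satisfied
    "ready":    {"assigned"},         # picked up for assignment
    "assigned": {"running"},          # agent starts work
    "running":  {"blocked", "done"},  # waits on an external event, or finishes
    "blocked":  {"running"},          # external event arrives
    "done":     set(),                # terminal
}

def can_transition(src: str, dst: str) -> bool:
    return dst in TRANSITIONS.get(src, set())

print(can_transition("pending", "ready"))  # True
print(can_transition("done", "running"))   # False
```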
Migration Guide: From Gastown to Tinytown
Tried Gastown and found it overwhelming? You're not alone. Here's how to get the same results with Tinytown.
Why You're Here
Gastown is powerful but complex:
- 50+ concepts to understand
- Multiple agent types (Mayor, Deacon, Witness, Polecats, etc.)
- Two-level database architecture (Town beads + Rig beads)
- Daemon processes, patrols, and recovery mechanisms
- Hours to set up, days to understand
Tinytown gives you 90% of the value with 10% of the complexity.
Quick Comparison
| What you wanted | Gastown way | Tinytown way |
|---|---|---|
| Start orchestrating | 10+ commands | tt init |
| Create an agent | Complex Polecat setup | tt spawn worker |
| Assign work | gt sling + convoys | tt assign worker "task" |
| Check status | gt convoy list, gt feed | tt status |
| Understand it | Read 300K lines | Read 1,400 lines |
Concept Mapping
Gastown → Tinytown
| Gastown Concept | Tinytown Equivalent |
|---|---|
| Town | Town (same name!) |
| Mayor | You (or your code) |
| Polecat | Agent |
| Beads | Tasks (simpler) |
| Convoy | Task groups (manual) |
| Hook | Agentβs inbox |
| Mail | Messages |
| Witness | Your monitoring code |
| Refinery | Your CI/CD |
What Tinytown Doesn't Have
Deliberately omitted for simplicity:
| Gastown Feature | Tinytown Alternative |
|---|---|
| Dolt SQL | Redis (simpler) |
| Git-backed beads | Redis persistence |
| Two-level DB | Single Redis instance |
| Daemon processes | You manage the process |
| Auto-recovery | Manual retry logic |
| Formulas | Write code directly |
| MEOW orchestration | Direct API calls |
Migration Steps
Step 1: Install Tinytown
git clone https://github.com/redis-field-engineering/tinytown.git
cd tinytown
cargo install --path .
Step 2: Initialize Your Project
Gastown:
# Multiple steps, daemon processes, config files...
gt boot
gt daemon start
# Configure rig, beads, etc.
Tinytown:
mkdir my-project && cd my-project
tt init --name my-project
# Done!
Step 3: Create Agents
Gastown:
# Configure polecat pools, spawn through Mayor...
gt mayor attach
# "Create a polecat for backend work"
Tinytown:
tt spawn backend --model claude
tt spawn frontend --model auggie
tt spawn reviewer --model codex
Step 4: Assign Work
Gastown:
# Create beads, slinging, convoys...
bd create --type task --title "Build API"
gt sling gt-abc12 gastown/polecats/Toast
gt convoy create "Feature X" gt-abc12
Tinytown:
tt assign backend "Build the REST API"
tt assign frontend "Build the UI"
Step 5: Monitor Progress
Gastown:
gt convoy list
gt convoy status hq-cv-abc
gt feed
gt dashboard # requires tmux
Tinytown:
tt status
tt list
Code Migration
Gastown Pattern: Tell the Mayor
# Gastown: Complex orchestration
# You tell Mayor what you want, Mayor figures out the rest
gt mayor attach
> Build a user authentication system with login, signup, and password reset
# Mayor creates convoy, assigns polecats, tracks progress...
Tinytown Pattern: Direct Control
// Tinytown: You're in control
let town = Town::connect(".").await?;

// Create your team
let designer = town.spawn_agent("designer", "claude").await?;
let backend = town.spawn_agent("backend", "auggie").await?;
let frontend = town.spawn_agent("frontend", "codex").await?;

// Assign work explicitly
designer.assign(Task::new("Design auth API schema")).await?;
wait_for_idle(&designer).await?;

backend.assign(Task::new("Implement auth endpoints")).await?;
frontend.assign(Task::new("Build login/signup UI")).await?;

// Wait for both
tokio::join!(
    wait_for_idle(&backend),
    wait_for_idle(&frontend)
);
When to Use Tinytown vs Gastown
Use Tinytown When:
✅ You want to understand the system
✅ You need something working in 30 seconds
✅ You're coordinating 1-5 agents
✅ You want to write your own orchestration logic
✅ Simple is better than feature-rich
Use Gastown When:
✅ You need 20+ concurrent agents
✅ You need git-backed work history
✅ You need automatic crash recovery
✅ You need cross-project coordination
✅ You have time to learn the system
Common Questions
Q: Can I use both? A: Yes! Start with Tinytown for simplicity. If you outgrow it, Gastown's there.
Q: Is Tinytown production-ready? A: For small teams and projects, yes. For enterprise scale, consider Gastown.
Q: Can I migrate Tinytown work to Gastown? A: Tasks are JSON. You could write a converter to Beads format.
Q: Does Tinytown support everything Gastown does? A: No, and that's the point. Tinytown does less, but what it does is simple.
Concept Mapping: Gastown → Tinytown
A detailed translation guide for Gastown users.
Agent Taxonomy
Gastown's Agent Zoo
Gastown has 8 agent types across two levels:
Town-Level:
| Agent | Role | Tinytown Equivalent |
|---|---|---|
| Mayor | Global coordinator | Your orchestration code |
| Deacon | Daemon, health monitoring | Your process + monitoring |
| Boot | Deacon watchdog | External health check |
| Dogs | Infrastructure helpers | Background tasks |
Rig-Level:
| Agent | Role | Tinytown Equivalent |
|---|---|---|
| Witness | Monitors polecats | Status polling loop |
| Refinery | Merge queue processor | CI/CD integration |
| Polecats | Workers | Agents |
| Crew | Human workspaces | N/A (you're the human) |
Tinytown's Simplicity
Tinytown has 2 agent types:
| Agent | Role |
|---|---|
| Supervisor | Well-known ID for coordination |
| Worker | Does the actual work |
Everything else? You write it explicitly.
Work Tracking
Gastown Beads
Beads are git-backed structured records:
ID: gt-abc12
Type: task
Title: Implement login API
Status: in_progress
Priority: P1
Created: 2024-03-01
Assigned: gastown/polecats/Toast
Parent: gt-xyz99 (epic)
Dependencies: [gt-def34, gt-ghi56]
Features:
- Stored in Dolt SQL
- Version controlled
- Two-level (Town + Rig)
- Rich schema with dependencies
- Prefix-based namespacing
Tinytown Tasks
Tasks are defined in tasks.toml:
[[tasks]]
id = "login-api"
description = "Implement login API"
agent = "backend"
status = "pending"
tags = ["auth", "api"]
Or assigned via CLI:
tt assign backend "Implement login API"
Features:
- Stored in Redis
- Defined in TOML (version-controlled)
- Single level
- Minimal schema
- Tags for organization
Translation
| Beads Feature | Tinytown Approach |
|---|---|
| Priority (P0-P4) | Use tags: ["P1"] |
| Type (task/bug/feature) | Use tags: ["bug"] |
| Dependencies | Manual coordination |
| Parent/child | parent_id field |
| Status history | Not built-in (log it yourself) |
Coordination Mechanisms
Gastown: Convoys
Convoys track batches of related work:
gt convoy create "User Auth Feature" gt-abc12 gt-def34 gt-ghi56
gt convoy status hq-cv-xyz
Features:
- Auto-created by Mayor
- Tracks multiple beads
- Lifecycle: OPEN → LANDED → CLOSED
- Event-driven completion detection
Tinytown: Manual Grouping
Use parent tasks or tags in tasks.toml:
# Option 1: Parent tasks
[[tasks]]
id = "auth-feature"
description = "User Auth Feature"
status = "pending"
[[tasks]]
id = "login"
description = "Login flow"
parent = "auth-feature"
agent = "backend"
status = "pending"
[[tasks]]
id = "signup"
description = "Signup flow"
parent = "auth-feature"
agent = "backend"
status = "pending"
# Option 2: Tags for grouping
[[tasks]]
id = "login-tagged"
description = "Login flow"
tags = ["auth-feature"]
agent = "backend"
status = "pending"
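With parent tasks, "is the feature done?" is a question you answer yourself. A minimal sketch (field names follow the TOML above; the "completed" status value is an assumption, check your own task states):

```python
def group_complete(tasks, parent_id):
    """True once every child of parent_id is completed (illustrative helper)."""
    children = [t for t in tasks if t.get("parent") == parent_id]
    return bool(children) and all(t["status"] == "completed" for t in children)

tasks = [
    {"id": "auth-feature", "status": "pending"},
    {"id": "login",  "parent": "auth-feature", "status": "completed"},
    {"id": "signup", "parent": "auth-feature", "status": "pending"},
]
print(group_complete(tasks, "auth-feature"))  # False
```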
Gastown: Hooks
Hooks are the assignment mechanism:
Polecat has hook → Hook has pinned bead → Polecat MUST work on it
The "GUPP Principle": If work is on your hook, you run it immediately.
Tinytown: Inboxes
Messages go to agent inboxes (Redis lists):
Agent has inbox → Messages queued → Agent polls/blocks for messages
You control when and how agents process work.
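Conceptually each inbox is a FIFO queue the agent drains; in Redis terms it is a list that senders push to the head of while the agent pops from the tail (the LPUSH/BRPOP pairing here is an assumption about the implementation). A toy Python model of the flow:

```python
from collections import deque

inbox = deque()  # stands in for the Redis list tt:<town>:inbox:<id>

inbox.appendleft("Task complete. Ready for review.")  # sender: LPUSH
inbox.appendleft("Critical issue found!")             # a later message

msg = inbox.pop()  # agent: pop from the tail, oldest message first
print(msg)  # Task complete. Ready for review.
```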
Communication
Gastown: Mail Protocol
Messages are beads of type message:
# Check mail
gt mail check
# Types: POLECAT_DONE, MERGE_READY, REWORK_REQUEST, etc.
Complex routing through beads system.
Tinytown: Direct Messages
Messages are transient, stored in Redis. Send via CLI:
# Send a message to another agent
tt send reviewer "Task complete. Ready for review."
# Send an urgent message
tt send reviewer --urgent "Critical issue found!"
Direct, simple, explicit.
State Persistence
Gastown: Multi-Layer
- Git worktrees - Sandbox persistence
- Beads ledger - Work state (Dolt SQL)
- Hooks - Work assignment
- State files - Runtime state (JSON)
Tinytown: Redis
Everything in Redis with town-isolated keys:
- tt:<town>:agent:<id> - Agent state
- tt:<town>:task:<id> - Task state
- tt:<town>:inbox:<id> - Message queues
Enable Redis persistence (RDB/AOF) for durability. Multiple towns can share the same Redis instance.
Recovery
Gastown: Automatic
- Witness patrols detect stalled polecats
- Deacon monitors system health
- Boot watches Deacon
- Hooks ensure work resumes on restart
Tinytown: Manual
You implement recovery via CLI:
# Check agent health
tt status
tt list
# If an agent is in error state, respawn it
tt prune
tt spawn worker-1 --model claude
# Reassign failed tasks
tt assign worker-1 "Retry the failed operation"
When Tinytown Falls Short
Gastown features you might miss:
| Feature | Why It's Useful | Tinytown Workaround |
|---|---|---|
| Automatic recovery | Hands-off operation | Write recovery loops |
| Git-backed history | Audit trail | Log to files |
| Dependency graphs | Complex workflows | Manual ordering |
| Cross-rig work | Multi-repo coordination | Run multiple towns |
| Dashboard | Visual monitoring | CLI + custom tooling |
If you find yourself building these features, consider whether Gastown's complexity is justified for your use case.
Why Tinytown?
A philosophical guide to choosing simplicity.
The Problem with Complex Systems
Gastown is impressive engineering. It has:
- Automatic crash recovery
- Git-backed work history
- Multi-agent coordination
- Visual dashboards
- Sophisticated orchestration
But it also has:
- 317,898 lines of code to understand
- 50+ concepts to learn
- Hours of setup before your first task
- Days of learning before you're productive
The Tinytown Philosophy
"Make it work. Make it simple. Stop."
Tinytown takes a different approach:
1. You Don't Need Most Features
90% of multi-agent orchestration is:
- Create agents
- Assign tasks
- Wait for completion
- Check results
Tinytown does exactly this. Nothing more.
2. Complexity Compounds
Every feature adds:
- Code to maintain
- Concepts to learn
- Bugs to fix
- Documentation to write
Tinytown has 1,448 lines of code. You can read the entire codebase in an afternoon.
3. Explicit is Better Than Magic
Gastown's Mayor "figures things out" for you:
gt mayor attach
> Build an authentication system
# Mayor creates convoy, spawns agents, distributes work...
Tinytown makes you say what you want:
architect.assign(Task::new("Design auth system")).await?;
developer.assign(Task::new("Implement auth")).await?;
tester.assign(Task::new("Test auth")).await?;
More typing, but you know exactly what's happening.
4. Recovery is Your Responsibility
Gastown: Witness patrols, Deacon monitors, Boot watches Deacon…
Tinytown: You write a loop:
if agent.state == AgentState::Error {
    respawn_and_retry(agent).await?;
}
Is this more work? Yes. Is it simpler to understand? Also yes.
The Tradeoffs
What You Gain
✅ Understanding — You know how it works
✅ Speed — Running in 30 seconds
✅ Debuggability — 1,400 lines to read
✅ Control — You decide everything
✅ Simplicity — 5 concepts total
What You Lose
❌ Automation — You write recovery logic
❌ Scale — Designed for 1-10 agents
❌ History — No git-backed audit trail
❌ Visualization — No built-in dashboard
❌ Federation — Single machine focus
When to Choose What
Choose Tinytown If:
- You're learning agent orchestration
- You want to ship something today
- You have 1-5 agents
- You prefer explicit over magic
- You value understanding over features
Choose Gastown If:
- You need 20+ concurrent agents
- You need audit trails
- You need automatic recovery
- You need cross-project coordination
- You have time to learn the system
Choose Both If:
Start with Tinytown. Learn the patterns. If you outgrow it, Gastown will make more sense because you understand what problems it's solving.
A Practical Test
Ask yourself:
1. How many agents do I need?
   - 1-5: Tinytown
   - 10+: Consider Gastown
2. How important is automatic recovery?
   - Nice to have: Tinytown
   - Critical: Gastown
3. How much time do I have?
   - Minutes: Tinytown
   - Days/weeks: Either
4. Do I want to understand the system?
   - Yes: Tinytown
   - No, just make it work: Gastown (eventually)
The Honest Answer
Tinytown exists because Gastown is hard to start with.
If you've bounced off Gastown, Tinytown gets you running. You can always graduate to Gastown later—and you'll appreciate its features more because you've felt the pain of not having them.
Start simple. Add complexity only when you need it.
"Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint-Exupéry
Custom Models
Add your own AI model configurations to Tinytown.
Model Configuration
Models are defined in tinytown.toml:
[agent_clis.claude]
name = "claude"
command = "claude --print"
[agent_clis.my-custom-model]
name = "my-custom-model"
command = "/path/to/my/agent --config ./agent.yaml"
workdir = "/path/to/working/dir"
[agent_clis.my-custom-model.env]
API_KEY = "secret"
MODEL_VERSION = "v2"
Model Properties
| Property | Required | Description |
|---|---|---|
name | Yes | Identifier for --model flag |
command | Yes | Shell command to run the agent |
workdir | No | Working directory for the command |
env | No | Environment variables |
Example: Local LLM
[agent_clis.local-llama]
name = "local-llama"
command = "llama-cli --model llama-3-70b --prompt-file task.txt"
workdir = "~/.local/share/llama"
[agent_clis.local-llama.env]
CUDA_VISIBLE_DEVICES = "0"
Usage:
tt spawn worker-1 --model local-llama
Example: Custom Script
Create a wrapper script:
#!/bin/bash
# ~/bin/my-agent.sh
# Read task from stdin or argument
TASK="$1"
# Your custom agent logic
python3 ~/agents/my_agent.py --task "$TASK"
Configure:
[agent_clis.my-agent]
name = "my-agent"
command = "~/bin/my-agent.sh"
Example: Docker Container
[agent_clis.docker-agent]
name = "docker-agent"
command = "docker run --rm -v $(pwd):/workspace my-agent:latest"
Programmatic Model Registration
In Rust code:
use tinytown::agent::AgentModel;

// Create custom model
let model = AgentModel::new("my-model", "my-command --flag")
    .with_workdir("/path/to/workdir")
    .with_env("API_KEY", "secret");

// Models are used when spawning
town.spawn_agent("worker", "my-model").await?;
Best Practices
- Use absolute paths — Relative paths may break
- Handle stdin/stdout — Agents should read tasks from messages
- Set timeouts — Don't let agents run forever
- Log output — Direct to the logs/ directory
- Test locally first — Before adding to config
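For the timeout point in particular, a wrapper in the spirit of this advice might look like the following Python sketch (the command is a stand-in, not a real agent CLI):

```python
import subprocess
import sys

# Placeholder command; a real wrapper would exec the agent CLI here
try:
    result = subprocess.run(
        [sys.executable, "-c", "print('agent finished')"],
        capture_output=True, text=True, timeout=300,
    )
    print(result.stdout.strip())  # agent finished
except subprocess.TimeoutExpired:
    print("agent exceeded its time budget; kill and reassign the task")
```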
Troubleshooting
Command Not Found
# Check the command works directly
/path/to/my/agent --help
Environment Variables Not Set
# Debug by printing the environment before the command runs
command = "env && /path/to/agent"
Working Directory Issues
Use absolute paths:
workdir = "/Users/you/agents"
Not:
workdir = "./agents" # May not resolve correctly
Redis Configuration
Tinytown uses Redis for message passing and state storage. Here's how to configure and optimize it.
Default Setup
By default, Tinytown:
- Starts a local Redis server
- Uses a Unix socket at ./redis.sock
- Disables TCP (port 0)
- Runs in-memory only
Unix Socket vs TCP
Unix Socket (Default)
[redis]
use_socket = true
socket_path = "redis.sock"
Pros:
- ~10x faster latency (~0.1ms vs ~1ms)
- No network overhead
- No port conflicts
Cons:
- Local only (same machine)
- File permissions matter
TCP Connection
[redis]
use_socket = false
host = "127.0.0.1"
port = 6379
bind = "127.0.0.1"
Use for:
- Remote Redis servers
- Docker containers
- Networked deployments
Security
Password Authentication
Enable password authentication for TCP connections:
[redis]
use_socket = false
host = "127.0.0.1"
port = 6379
password = "your-secret-password"
Note: Password is required when binding to non-localhost addresses.
TLS Encryption
Enable TLS for encrypted connections:
[redis]
use_socket = false
host = "redis.example.com"
port = 6379
password = "secret123"
tls_enabled = true
tls_cert = "/etc/ssl/redis.crt"
tls_key = "/etc/ssl/redis.key"
tls_ca_cert = "/etc/ssl/ca.crt"
When TLS is enabled:
- Tinytown uses the rediss:// URL scheme
- The non-TLS port is disabled
- Certificates are passed to the Redis server on startup
Security Recommendations
- Use Unix sockets for local development - Most secure, no network exposure
- Bind to localhost (127.0.0.1) when possible
- Always use a password for non-localhost bindings
- Enable TLS for production and remote connections
- Use environment variables for passwords in CI/CD
Connecting to External Redis
Use an existing Redis server instead of starting one:
[redis]
use_socket = false
host = "redis.example.com"
port = 6379
password = "your-password"
Tinytown will connect without starting a new server (external hosts are auto-detected).
Persistence
By default, Redis runs in-memory. Data is lost on restart.
Enable RDB Snapshots
redis-cli -s ./redis.sock CONFIG SET save "60 1"
Saves every 60 seconds if at least 1 key changed.
Enable AOF (Append Only File)
redis-cli -s ./redis.sock CONFIG SET appendonly yes
redis-cli -s ./redis.sock CONFIG SET appendfsync everysec
Logs every write. More durable but slower.
Recommended Production Settings
# Save every 5 min if 1+ changes, every 1 min if 100+ changes
redis-cli CONFIG SET save "300 1 60 100"
# Enable AOF with fsync every second
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG SET appendfsync everysec
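The same policy can be made permanent in redis.conf rather than applied at runtime. These are standard Redis directives; note that one `CONFIG SET save "300 1 60 100"` call corresponds to two `save` lines in the file:

```
# redis.conf equivalents of the runtime CONFIG SET calls above
save 300 1
save 60 100
appendonly yes
appendfsync everysec
```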
Memory Management
Set Memory Limit
redis-cli CONFIG SET maxmemory 256mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru
Monitor Memory
redis-cli INFO memory
Key Patterns
Tinytown uses town-isolated key patterns:
| Pattern | Type | Purpose |
|---|---|---|
tt:<town>:inbox:<uuid> | List | Agent message queues |
tt:<town>:agent:<uuid> | String | Agent state (JSON) |
tt:<town>:task:<uuid> | String | Task state (JSON) |
tt:broadcast | Pub/Sub | Broadcast channel |
This town-isolated format allows multiple Tinytown projects to share the same Redis instance. See tt migrate for upgrading from older key formats.
Debugging
Connect to Redis
# Unix socket
redis-cli -s ./redis.sock
# TCP
redis-cli -h 127.0.0.1 -p 6379
Useful Commands
# List all tinytown keys for a town
KEYS tt:<town_name>:*
# Check inbox length
LLEN tt:<town_name>:inbox:550e8400-...
# View agent state
GET tt:<town_name>:agent:550e8400-...
# Monitor all operations
MONITOR
# Get server info
INFO
Clear All Data
# Danger: Deletes everything!
redis-cli -s ./redis.sock FLUSHALL
Docker Deployment
# docker-compose.yml
version: '3'
services:
redis:
image: redis:8
ports:
- "6379:6379"
volumes:
- redis-data:/data
command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
volumes:
redis-data:
Then configure Tinytown:
[redis]
use_socket = false
host = "localhost"
port = 6379
password = "your-docker-redis-password"
Performance Tuning
For Low Latency
- Use Unix sockets
- Disable persistence (if acceptable)
- Use local SSD
For Durability
- Enable AOF with appendfsync everysec
- Use persistent storage
- Set up replication (advanced)
For High Throughput
- Increase tcp-backlog
- Tune timeout and tcp-keepalive
- Use pipelining in code
Townhall Control Plane
Townhall is the HTTP control plane for Tinytown. It exposes all Tinytown operations via REST API and MCP (Model Context Protocol), enabling remote management from web clients, mobile apps, and AI tools.
Quick Start
# Start townhall daemon (REST API on port 8787)
townhall
# With verbose logging
townhall --verbose
# Custom port
townhall rest --port 9000
# For a specific town
townhall --town /path/to/project
Modes
Townhall supports three modes:
| Mode | Command | Transport | Use Case |
|---|---|---|---|
| REST API | townhall rest | HTTP/JSON | Web/mobile clients, scripts |
| MCP stdio | townhall mcp-stdio | stdin/stdout | IDE extensions, Claude Desktop |
| MCP HTTP | townhall mcp-http | HTTP/SSE | Browser MCP clients |
REST API (Default)
townhall rest --bind 127.0.0.1 --port 8787
All CLI operations are available as HTTP endpoints:
# Get status
curl http://localhost:8787/v1/status
# List agents
curl http://localhost:8787/v1/agents
# Spawn agent
curl -X POST http://localhost:8787/v1/agents \
-H "Content-Type: application/json" \
-d '{"name": "worker-1", "cli": "claude"}'
# Assign task
curl -X POST http://localhost:8787/v1/tasks/assign \
-H "Content-Type: application/json" \
-d '{"agent": "worker-1", "task": "Fix the bug"}'
MCP Mode
For AI tool integration (Claude Desktop, VS Code, etc.):
# stdio transport (for Claude Desktop)
townhall mcp-stdio
# HTTP/SSE transport (for web clients)
townhall mcp-http --port 8788
See MCP Interface for detailed MCP documentation.
API Reference
Endpoints
| Endpoint | Method | Scope | Description |
|---|---|---|---|
| `/healthz` | GET | public | Health check |
| `/v1/town` | GET | town.read | Get town info |
| `/v1/status` | GET | town.read | Get full status |
| `/v1/agents` | GET | town.read | List agents |
| `/v1/agents` | POST | agent.manage | Spawn agent |
| `/v1/agents/{agent}/kill` | POST | agent.manage | Stop agent |
| `/v1/agents/{agent}/restart` | POST | agent.manage | Restart agent |
| `/v1/agents/prune` | POST | agent.manage | Prune dead agents |
| `/v1/tasks/assign` | POST | town.write | Assign task |
| `/v1/tasks/pending` | GET | town.read | List pending tasks |
| `/v1/backlog` | GET | town.read | List backlog |
| `/v1/backlog` | POST | town.write | Add to backlog |
| `/v1/backlog/{task_id}/claim` | POST | town.write | Claim backlog task |
| `/v1/backlog/assign-all` | POST | town.write | Assign all backlog |
| `/v1/backlog/{task_id}` | DELETE | town.write | Remove backlog task |
| `/v1/messages/send` | POST | town.write | Send message |
| `/v1/agents/{agent}/inbox` | GET | town.read | Get inbox |
| `/v1/recover` | POST | agent.manage | Recover orphaned agents |
| `/v1/reclaim` | POST | agent.manage | Reclaim tasks |
See the OpenAPI spec for complete API documentation.
Error Handling
Errors follow RFC 7807 Problem Details:
{
  "type": "https://tinytown.dev/errors/404",
  "title": "Not Found",
  "status": 404,
  "detail": "Agent 'worker-99' not found"
}
Configuration
Configure townhall in tinytown.toml:
[townhall]
bind = "127.0.0.1" # Bind address (default: 127.0.0.1)
rest_port = 8787 # REST API port (default: 8787)
request_timeout_ms = 30000 # Request timeout (default: 30s)
For production deployments, enable Authentication:
[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$..." # Use: tt generate-api-key
Security
Startup Safety Rules
Townhall enforces security by default:
- Non-loopback binding requires authentication: cannot bind to `0.0.0.0` with `auth.mode = "none"`
- Warnings for API key on non-loopback: recommends OIDC for production
- TLS/mTLS validation: fails fast on invalid certificate configuration
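The first rule boils down to a loopback check on the bind address. A minimal sketch of that logic (ours, not Townhall's actual startup code):

```rust
use std::net::IpAddr;

/// Illustrative version of the rule above: serving on a non-loopback
/// address is only allowed when some authentication mode is enabled.
fn bind_allowed(bind: &str, auth_mode: &str) -> bool {
    let loopback = bind
        .parse::<IpAddr>()
        .map(|ip| ip.is_loopback())
        .unwrap_or(false);
    loopback || auth_mode != "none"
}
```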
Best Practices
- Use `127.0.0.1` for local development
- Enable API key or OIDC authentication for any network exposure
- Enable TLS for production deployments
- Use mTLS for service-to-service communication
Townhall REST API
townhall is Tinytown's REST control plane daemon. It exposes the same orchestration services used by the tt CLI over HTTP.
Start the Server
# From a town directory
townhall
# Explicit REST mode
townhall rest
# Override bind/port
townhall rest --bind 127.0.0.1 --port 8080
Defaults come from tinytown.toml:
[townhall]
bind = "127.0.0.1"
rest_port = 8080
request_timeout_ms = 30000
Endpoint Groups
The router is split into public/read/write/management groups:
- Public: `GET /healthz`
- Read (`town.read`): `GET /v1/town`, `GET /v1/status`, `GET /v1/agents`, `GET /v1/tasks/pending`, `GET /v1/backlog`, `GET /v1/agents/{agent}/inbox`
- Write (`town.write`): `POST /v1/tasks/assign`, `POST /v1/backlog`, `POST /v1/backlog/{task_id}/claim`, `POST /v1/backlog/assign-all`, `DELETE /v1/backlog/{task_id}`, `POST /v1/messages/send`
- Agent management (`agent.manage`): `POST /v1/agents`, `POST /v1/agents/{agent}/kill`, `POST /v1/agents/{agent}/restart`, `POST /v1/agents/prune`, `POST /v1/recover`, `POST /v1/reclaim`
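The grouping above can be read as a method-and-path lookup. A rough sketch of that mapping (illustrative only, not the actual router code):

```rust
/// Maps a request to the scope it needs, following the groups above.
/// Returns None for public endpoints.
fn required_scope(method: &str, path: &str) -> Option<&'static str> {
    if path == "/healthz" {
        return None; // public
    }
    if method == "GET" {
        return Some("town.read");
    }
    // Mutating agent-lifecycle and recovery routes need agent.manage;
    // everything else that mutates is town.write.
    let manage = path == "/v1/agents"
        || path.starts_with("/v1/agents/") // kill / restart / prune
        || path == "/v1/recover"
        || path == "/v1/reclaim";
    if manage {
        Some("agent.manage")
    } else {
        Some("town.write")
    }
}
```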
Authentication
townhall supports three auth modes in config:
- `none` (default): local/no-auth mode
- `api_key`: API key via `Authorization: Bearer <key>` or `X-API-Key`
- `oidc`: declared in config, not yet implemented in middleware
Generate an API key + Argon2 hash:
tt auth gen-key
Then configure:
[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$..."
api_key_scopes = ["town.read", "town.write", "agent.manage"]
Example request:
curl -H "Authorization: Bearer $TOWNHALL_API_KEY" \
http://127.0.0.1:8080/v1/status
Startup Safety Rules
At startup, townhall fails fast when:
- Binding to a non-loopback address with `auth.mode = "none"`
- TLS is enabled but `cert_file`/`key_file` are missing or invalid
- mTLS is required but `ca_file` is missing or invalid
OpenAPI Spec
The REST contract is documented in:
docs/openapi/townhall-v1.yaml
You can load this file in Swagger UI, Stoplight, or Redoc for interactive exploration.
Townhall MCP Server
Tinytown includes an MCP server in the townhall binary for LLM/tooling integrations.
Start MCP
# MCP over stdio (for local MCP clients)
townhall mcp-stdio
# MCP over HTTP/SSE
townhall mcp-http
# Override bind/port (default port = rest_port + 1)
townhall mcp-http --bind 127.0.0.1 --port 8081
Registered MCP Tools
Read tools:
- `town.get_status`
- `agent.list`
- `agent.inbox`
- `task.list_pending`
- `backlog.list`
Write tools:
- `task.assign`
- `message.send`
- `backlog.add`
- `backlog.claim`
- `backlog.assign_all`
- `backlog.remove`
Agent-management/recovery tools:
- `agent.spawn`
- `agent.kill`
- `agent.restart`
- `agent.prune`
- `recovery.recover_agents`
- `recovery.reclaim_tasks`
Tool responses are JSON payloads wrapped as:
{
  "success": true,
  "data": {},
  "error": null
}
Registered MCP Resources
Static resources:
- `tinytown://town/current`
- `tinytown://agents`
- `tinytown://backlog`
Resource templates:
- `tinytown://agents/{agent_name}`
- `tinytown://tasks/{task_id}`
Registered MCP Prompts
- `conductor.startup_context`
- `agent.role_hint` (`agent_name` required, `tags` optional)
Notes
- MCP tools call the same Tinytown service layer used by CLI and REST.
- `mcp-http` uses Tower MCP's HTTP/SSE transport and follows standard MCP message semantics.
MCP Interface
Tinytown exposes a full Model Context Protocol (MCP) interface, allowing AI tools like Claude Desktop, Cursor, and other MCP clients to directly orchestrate agent towns.
Quick Start
Claude Desktop Integration
Add to ~/.config/claude/claude_desktop_config.json:
{
  "mcpServers": {
    "tinytown": {
      "command": "townhall",
      "args": ["mcp-stdio", "--town", "/path/to/your/project"]
    }
  }
}
Restart Claude Desktop. You can now ask Claude to manage your agent town!
HTTP/SSE Mode
For browser-based MCP clients:
townhall mcp-http --port 8788
# MCP endpoint: http://localhost:8788
Available Tools
MCP tools provide programmatic access to all Tinytown operations:
Read-Only Tools (town.read scope)
| Tool | Description |
|---|---|
| `town.get_status` | Get town status including all agents |
| `agent.list` | List all agents with current status |
| `backlog.list` | List all tasks in the backlog |
Write Tools (town.write scope)
| Tool | Description |
|---|---|
| `task.assign` | Assign a task to an agent |
| `task.complete` | Mark a task as completed |
| `message.send` | Send a message to an agent |
| `backlog.add` | Add a task to the backlog |
| `backlog.claim` | Claim a backlog task for an agent |
| `backlog.assign_all` | Assign all backlog tasks to an agent |
Agent Management Tools (agent.manage scope)
| Tool | Description |
|---|---|
| `agent.spawn` | Spawn a new agent |
| `agent.kill` | Kill (stop) an agent gracefully |
| `agent.restart` | Restart a stopped agent |
| `recovery.recover_agents` | Recover orphaned agents |
| `recovery.reclaim_tasks` | Reclaim tasks from dead agents |
Resources
MCP resources provide read-only data access:
| Resource URI | Description |
|---|---|
| `tinytown://town/current` | Current town state |
| `tinytown://agents` | List of all agents |
| `tinytown://agents/{name}` | Specific agent details |
| `tinytown://backlog` | Current backlog |
| `tinytown://tasks/{id}` | Specific task details |
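Resolving these URIs is a prefix match against the static names and the two templates. A hypothetical sketch (not the server's actual routing code):

```rust
/// Resolves a resource URI to a (kind, optional parameter) pair,
/// following the table above. Kind names here are ours.
fn resolve_resource(uri: &str) -> Option<(&'static str, Option<&str>)> {
    match uri {
        "tinytown://town/current" => Some(("town", None)),
        "tinytown://agents" => Some(("agents", None)),
        "tinytown://backlog" => Some(("backlog", None)),
        _ => {
            if let Some(name) = uri.strip_prefix("tinytown://agents/") {
                Some(("agent", Some(name)))
            } else if let Some(id) = uri.strip_prefix("tinytown://tasks/") {
                Some(("task", Some(id)))
            } else {
                None
            }
        }
    }
}
```

Note the order matters: the exact match on `tinytown://agents` must be checked before the `tinytown://agents/` prefix strip.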
Prompts
MCP prompts provide templated interactions:
| Prompt | Description |
|---|---|
| `conductor.startup_context` | Context for conductor startup |
| `agent.role_hint` | Role hints for agents |
Example Conversation
With MCP configured, you can have natural conversations with Claude:
You: "Spawn two backend agents and assign them bug fixes"
Claude: I'll create two backend agents and assign tasks to them.
[Uses agent.spawn tool twice, then task.assign tool]
Done! I've spawned backend-1 and backend-2 and assigned bug fix tasks to each.
You: "What's the status of the town?"
Claude: [Uses town.get_status tool] Your town has 2 agents running:
- `backend-1`: Working, 3 tasks completed
- `backend-2`: Working, 2 tasks completed
Tool Response Format
All tools return JSON with consistent structure:
{
  "success": true,
  "data": {
    "agent_id": "abc123...",
    "name": "worker-1",
    "cli": "claude"
  }
}
On error:
{
  "success": false,
  "error": "Agent 'worker-99' not found"
}
Transports
| Transport | Command | Best For |
|---|---|---|
| stdio | townhall mcp-stdio | Claude Desktop, IDE extensions |
| HTTP/SSE | townhall mcp-http | Browser clients, remote access |
stdio Transport
Used by most desktop applications. The MCP server reads JSON-RPC from stdin and writes to stdout.
HTTP/SSE Transport
Uses Server-Sent Events for server-to-client messages and HTTP POST for client-to-server messages.
# Start on custom port
townhall mcp-http --bind 0.0.0.0 --port 8788
Authentication & Authorization
Townhall supports multiple authentication modes and fine-grained authorization scopes for secure remote access.
Authentication Modes
None (Default)
No authentication required. Only safe on loopback (127.0.0.1).
[townhall.auth]
mode = "none" # Default
⚠️ Security: Townhall will refuse to start if binding to non-loopback addresses with `mode = "none"`.
API Key
Secure token-based authentication using Argon2id password hashing.
[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$m=19456,t=2,p=1$..."
api_key_scopes = ["town.read", "town.write"] # Optional: restrict scopes
Generating an API Key
# Generate a new API key (outputs raw key and hash)
tt generate-api-key
# Output:
# API Key: abc123def456...
# Hash: $argon2id$v=19$m=19456,t=2,p=1$...
#
# Add to tinytown.toml:
# [townhall.auth]
# mode = "api_key"
# api_key_hash = "$argon2id$v=19$..."
Important: Store the raw API key securely. Only the hash is stored in config.
Using API Keys
Pass the key via Authorization header or X-API-Key:
# Bearer token (recommended)
curl -H "Authorization: Bearer <your-api-key>" \
http://localhost:8787/v1/status
# X-API-Key header
curl -H "X-API-Key: <your-api-key>" \
http://localhost:8787/v1/status
OIDC (Coming Soon)
OpenID Connect authentication for enterprise deployments.
[townhall.auth]
mode = "oidc"
issuer = "https://issuer.example.com"
audience = "tinytown-api"
jwks_url = "https://issuer.example.com/.well-known/jwks.json"
required_scopes = ["tinytown:access"]
clock_skew_seconds = 60
Authorization Scopes
All endpoints require specific scopes for access:
| Scope | Description | Endpoints |
|---|---|---|
| `town.read` | Read status and agent info | `GET /v1/*`, inbox |
| `town.write` | Assign tasks, send messages | `POST /v1/tasks/*`, messages, backlog |
| `agent.manage` | Spawn/kill/restart agents | `POST /v1/agents`, recovery |
| `admin` | Full access (grants all scopes) | All endpoints |
Configuring Scopes
For API key auth, configure allowed scopes:
[townhall.auth]
mode = "api_key"
api_key_hash = "..."
api_key_scopes = ["town.read", "town.write"] # Read/write but no agent management
Empty `api_key_scopes` grants admin access (all scopes).
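The scope rules described above (empty list means admin, and `admin` implies every scope) fit in a few lines. An illustrative sketch, not Townhall's actual authorization code:

```rust
/// Returns true if the granted scopes permit an operation that
/// requires `required`, per the rules above.
fn scope_allows(granted: &[&str], required: &str) -> bool {
    granted.is_empty()                 // empty list = admin access
        || granted.contains(&"admin")  // admin grants all scopes
        || granted.contains(&required)
}
```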
Audit Logging
All mutating operations (POST, PUT, DELETE, PATCH) are logged:
INFO audit: operation completed request_id="abc123" principal="api_key" method="POST" path="/v1/agents" result="success"
WARN audit: operation denied request_id="def456" principal="api_key" method="POST" path="/v1/recover" result="denied"
Audit events include:
- Request ID - Unique identifier for correlation
- Principal ID - Who made the request
- Scopes - What permissions they had
- Method/Path - What they tried to do
- Result - success, denied, or error
Security Notes
- Authorization headers are never logged
- API keys are stored as Argon2id hashes, never plaintext
- Auth errors use constant-time responses to prevent timing attacks
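The constant-time comparison mentioned above is a standard technique: compare every byte regardless of where the first mismatch occurs, so response timing leaks nothing about the secret. A minimal sketch (production code would typically use a vetted crate such as `subtle`):

```rust
/// Compares two byte slices in time independent of where they differ.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // Accumulate XOR differences instead of returning at the first
    // mismatch, so runtime does not depend on the inputs' contents.
    a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
}
```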
TLS Configuration
For production deployments, enable TLS:
[townhall.tls]
enabled = true
cert_path = "/path/to/server.crt"
key_path = "/path/to/server.key"
Mutual TLS (mTLS)
For service-to-service authentication:
[townhall.mtls]
enabled = true
required = true # Reject clients without valid certs
ca_path = "/path/to/ca.crt"
Security Best Practices
- Local development: Use `mode = "none"` with `bind = "127.0.0.1"`
- Team access: Use `mode = "api_key"` with TLS
- Production: Use `mode = "oidc"` with TLS and mTLS
- Always scope API keys to minimum required permissions
- Rotate API keys regularly
API Reference
The Tinytown Rust API for programmatic control.
Quick Links
- Townhall REST API – HTTP control plane daemon
- Townhall MCP Server – MCP tools/resources/prompts
- Full rustdoc – Complete API documentation
Core Types
Town
#![allow(unused)]
fn main() {
use tinytown::Town;
// Initialize new town
let town = Town::init("./path", "name").await?;
// Connect to existing
let town = Town::connect("./path").await?;
// Operations
let agent = town.spawn_agent("name", "cli").await?;
let agent = town.agent("name").await?;
let agents = town.list_agents().await;
let channel = town.channel();
let config = town.config();
let root = town.root();
}
Agent
#![allow(unused)]
fn main() {
use tinytown::{Agent, AgentId, AgentType, AgentState};
// Create agent
let agent = Agent::new("name", "cli", AgentType::Worker);
// Supervisor (well-known ID)
let supervisor = Agent::supervisor("coordinator");
// Check state
if agent.state.is_terminal() { /* stopped or error */ }
if agent.state.can_accept_work() { /* idle */ }
}
AgentHandle
#![allow(unused)]
fn main() {
// Get handle from town
let handle = town.spawn_agent("worker", "claude").await?;
// Operations
let id = handle.id();
let task_id = handle.assign(task).await?;
handle.send(MessageType::StatusRequest).await?;
let len = handle.inbox_len().await?;
let state = handle.state().await?;
handle.wait().await?;
}
Task
#![allow(unused)]
fn main() {
use tinytown::{Task, TaskId, TaskState};
// Create
let task = Task::new("description");
let task = Task::new("desc").with_tags(["tag1", "tag2"]);
let task = Task::new("desc").with_parent(parent_id);
// Lifecycle
task.assign(agent_id);
task.start();
task.complete("result");
task.fail("error");
// Check state
if task.state.is_terminal() { /* completed, failed, or cancelled */ }
}
Message
#![allow(unused)]
fn main() {
use tinytown::{Message, MessageId, MessageType, Priority};
// Create
let msg = Message::new(from, to, MessageType::TaskAssign {
task_id: "abc".into()
});
// With options
let msg = msg.with_priority(Priority::Urgent);
let msg = msg.with_correlation(other_msg.id);
}
MessageType
#![allow(unused)]
fn main() {
pub enum MessageType {
    // Semantic types for inter-agent communication
    Task { description: String },
    Query { question: String },
    Informational { summary: String },
    Confirmation { ack_type: ConfirmationType },
    // Task lifecycle
    TaskAssign { task_id: String },
    TaskDone { task_id: String, result: String },
    TaskFailed { task_id: String, error: String },
    // Status
    StatusRequest,
    StatusResponse { state: String, current_task: Option<String> },
    // Lifecycle
    Ping,
    Pong,
    Shutdown,
    // Extensibility
    Custom { kind: String, payload: String },
}
pub enum ConfirmationType {
    Received,
    Acknowledged,
    Thanks,
    Approved,
    Rejected { reason: String },
}
}
Helpers: `msg.is_actionable()`, `msg.is_informational_or_confirmation()`
Channel
#![allow(unused)]
fn main() {
use tinytown::Channel;
use std::time::Duration;
let channel = town.channel();
// Messages
channel.send(&msg).await?;
let msg = channel.receive(agent_id, Duration::from_secs(30)).await?;
let msg = channel.try_receive(agent_id).await?;
let len = channel.inbox_len(agent_id).await?;
channel.broadcast(&msg).await?;
// State
channel.set_agent_state(&agent).await?;
let agent = channel.get_agent_state(agent_id).await?;
channel.set_task(&task).await?;
let task = channel.get_task(task_id).await?;
}
Error Handling
#![allow(unused)]
fn main() {
use tinytown::{Error, Result};
match result {
    Ok(value) => { /* success */ }
    Err(Error::Redis(e)) => { /* redis error */ }
    Err(Error::AgentNotFound(name)) => { /* agent missing */ }
    Err(Error::TaskNotFound(id)) => { /* task missing */ }
    Err(Error::NotInitialized(path)) => { /* town not init */ }
    Err(Error::RedisNotInstalled) => { /* redis missing */ }
    Err(Error::RedisVersionTooOld(ver)) => { /* upgrade redis */ }
    Err(Error::Timeout(msg)) => { /* operation timed out */ }
    Err(e) => { /* other error */ }
}
}
Example: Complete Workflow
use tinytown::{Town, Task, AgentState, Result};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<()> {
    // Connect
    let town = Town::connect(".").await?;

    // Spawn agents
    let dev = town.spawn_agent("dev", "claude").await?;
    let reviewer = town.spawn_agent("reviewer", "codex").await?;

    // Assign work
    dev.assign(Task::new("Build the feature")).await?;

    // Wait for completion
    loop {
        if let Some(agent) = dev.state().await? {
            if matches!(agent.state, AgentState::Idle) {
                break;
            }
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }

    // Send for review
    reviewer.assign(Task::new("Review the feature")).await?;
    Ok(())
}