
Tinytown

Simple multi-agent orchestration using Redis — All the power, none of the complexity.

Welcome to Tinytown! 🏘️

Tinytown is a minimal, blazing-fast multi-agent orchestration system. It lets you coordinate AI coding agents (Claude, Augment, Codex, and more) using Redis for message passing.

Why Tinytown?

If you've tried to set up complex orchestration systems like Gastown and found yourself drowning in configuration files, agent taxonomies, and recovery mechanisms, Tinytown is for you.

| What you want | Complex systems      | Tinytown         |
|---------------|----------------------|------------------|
| Get started   | Hours of setup       | 30 seconds       |
| Understand it | 50+ concepts         | 5 types          |
| Configure it  | 10+ config files     | 1 TOML file      |
| Debug it      | Navigate 300K+ lines | Read 1,400 lines |

Core Philosophy

Simplicity is a feature, not a limitation.

Tinytown does less, so you can do more. We include only what you need:

✅ Spawn and manage agents
✅ Assign tasks and track state
✅ Keep unassigned work in a shared backlog
✅ Pass messages between agents
✅ Persist work in Redis

And we deliberately leave out:

❌ Complex workflow DAGs
❌ Distributed transactions
❌ Recovery daemons
❌ Multi-layer databases

When you need those features, you'll know, and you can add them yourself in a few lines of code or upgrade to a more complex system.

Quick Example

# Initialize a town
tt init --name my-project

# Spawn agents (uses default model, or specify with --model)
tt spawn frontend
tt spawn backend
tt spawn reviewer

# Assign tasks
tt assign frontend "Build the login page"
tt assign backend "Create the auth API"
tt assign reviewer "Review PRs when ready"

# Or park unassigned tasks for role-based claiming
tt backlog add "Harden auth error handling" --tags backend,security
tt backlog list

# Check status
tt status

# Or let the conductor orchestrate for you!
tt conductor
# "Build a user authentication system"
# Conductor spawns agents, breaks down tasks, and coordinates...

That's it. Your agents are now coordinating via Redis.

Plan Work with tasks.toml

For complex workflows, define tasks in a file:

tt plan --init   # Creates tasks.toml

Edit tasks.toml to define your pipeline:

[[tasks]]
id = "auth-api"
description = "Build the auth API"
agent = "backend"
status = "pending"

[[tasks]]
id = "auth-tests"
description = "Write auth tests"
agent = "tester"
parent = "auth-api"
status = "pending"

Then sync to Redis and let agents work:

tt sync push
tt conductor

See tt plan for the full task DSL.

What's Next?

Named After

Tiny Town, Colorado: a miniature village with big charm, just like this project! 🏔️

Installation

Getting Tinytown running takes about 30 seconds.

Step 1: Install Tinytown

cargo install tinytown

From Source

git clone https://github.com/redis-field-engineering/tinytown.git
cd tinytown
cargo install --path .

Need Rust? Install it via rustup: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Step 2: Install Redis 8.0+

Tinytown requires Redis 8.0 or later.

Let Tinytown download and build Redis for you using an AI agent:

tt bootstrap
export PATH="$HOME/.tt/bin:$PATH"

This gets you the latest Redis compiled and optimized for your machine. Add the export to ~/.zshrc or ~/.bashrc for persistence.

Option 2: Package Manager

macOS:

brew install redis

Ubuntu/Debian:

curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt-get update
sudo apt-get install redis

Option 3: From Source (Manual)

curl -LO https://github.com/redis/redis/archive/refs/tags/8.0.2.tar.gz
tar xzf 8.0.2.tar.gz
cd redis-8.0.2 && make && sudo make install

For more options, see the Redis downloads page.

Step 3: Verify Installation

# Check tt is installed
tt --help

# Should output:
# Tinytown - Simple multi-agent orchestration using Redis
# ...

# Verify Redis version
redis-server --version
# Should show v=8.x.x or higher

What's Next?

You're ready to go! Head to the Quick Start to create your first town.

Quick Start

Let's get a multi-agent workflow running in under 5 minutes.

First time? Make sure you've installed Tinytown first.

Step 1: Initialize a Town

A Town is your orchestration workspace. It manages Redis, agents, and message passing.

# Create a project directory
mkdir my-project && cd my-project

# Initialize the town
tt init --name my-project

You’ll see:

✨ Initialized town 'my-project' at .
📡 Redis running with Unix socket for fast message passing
🚀 Run 'tt spawn <name>' to create agents

This creates:

  • tinytown.toml — Configuration file
  • agents/ — Agent working directories
  • logs/ — Activity logs
  • tasks/ — Task storage
  • redis.sock — Unix socket for fast Redis communication

Step 2: Spawn an Agent

Agents are workers that execute tasks. Spawn one:

tt spawn worker-1

Output:

🤖 Spawned agent 'worker-1' using model 'claude'
   ID: 550e8400-e29b-41d4-a716-446655440000

The agent uses the default_cli from your config. Spawn more:

tt spawn worker-2
tt spawn reviewer

Or override with --model:

tt spawn specialist --model codex

Step 3: Assign Tasks

Give your agents something to do:

tt assign worker-1 "Implement the user login API endpoint"
tt assign worker-2 "Write tests for the login API"
tt assign reviewer "Review the login implementation when ready"

Step 4: Check Status

See what's happening in your town:

tt status

Output:

🏘️  Town: my-project
📂 Root: /path/to/my-project
📡 Redis: unix:///path/to/my-project/redis.sock
🤖 Agents: 3
   worker-1 (Working) - 0 messages pending
   worker-2 (Idle) - 1 messages pending
   reviewer (Idle) - 1 messages pending

List all agents:

tt list

Step 5: Keep It Running

To keep a town connection open during development:

tt start

Press Ctrl+C to stop gracefully.

What Just Happened?

You created a Town with three Agents. Each agent received a Task via a Message sent through a Redis Channel.

That's the entire Tinytown model:

Town → spawns → Agents
       ↓
     Channel (Redis)
       ↓
     Messages → contain → Tasks

Bonus: Planning with tasks.toml

For larger projects, define tasks in a file instead of CLI commands:

# Initialize a task plan
tt plan --init

Edit tasks.toml:

[[tasks]]
id = "login-api"
description = "Implement the user login API endpoint"
agent = "worker-1"
status = "pending"

[[tasks]]
id = "login-tests"
description = "Write tests for the login API"
agent = "worker-2"
status = "pending"
parent = "login-api"

Sync to Redis:

tt sync push

Now your tasks are version-controlled and can be reviewed in PRs. See tt plan for more.

Next Steps

Your First Town

Let's build a real workflow: coordinating agents to implement and review a feature.

The Scenario

You want to:

  1. Have one agent implement a feature
  2. Have another agent write tests
  3. Have a third agent review both

Project Setup

# Create and enter your project
mkdir feature-builder && cd feature-builder

# Initialize git (optional but recommended)
git init

# Initialize tinytown
tt init --name feature-builder

Understanding the Config

Open tinytown.toml:

name = "feature-builder"
default_cli = "claude"
max_agents = 10

[redis]
use_socket = true
socket_path = "redis.sock"

Key settings:

  • use_socket = true — Uses a Unix socket for ~10x faster communication than TCP
  • default_cli — Agent CLI used when --model isn't specified
  • max_agents — Prevents accidentally spawning too many agents

Create Your Team

# The implementer
tt spawn dev --model claude

# The tester  
tt spawn tester --model auggie

# The reviewer
tt spawn reviewer --model codex

Check your team:

tt list
Agents:
  dev (550e8400-...) - Starting
  tester (6ba7b810-...) - Starting
  reviewer (6ba7b811-...) - Starting

Assign the Work

# Implementation task
tt assign dev "Create a REST API endpoint POST /users that:
- Accepts {email, password, name}
- Validates email format
- Hashes password with bcrypt
- Returns {id, email, name, created_at}"

# Testing task
tt assign tester "Write integration tests for POST /users:
- Test successful creation
- Test duplicate email rejection
- Test invalid email format
- Test missing required fields"

# Review task
tt assign reviewer "Review the implementation and tests when ready:
- Check for security issues
- Verify error handling
- Ensure tests cover edge cases"

Monitor Progress

# See overall status
tt status

# Watch for changes (re-run periodically)
watch -n 5 tt status

What Happens Behind the Scenes

  1. Task Creation: Each tt assign creates a Task with a unique ID
  2. Message Sending: A Message of type TaskAssign is sent to the agent's inbox
  3. Redis Queue: Messages are stored in town-isolated Redis lists (tt:<town>:inbox:<agent-id>)
  4. Agent Pickup: Agents receive messages via BLPOP (blocking pop)
  5. State Tracking: Agent and task states are stored in Redis

Connecting Real Agents

Tinytown creates the infrastructure, but you need to connect actual AI agents. The spawn command prepares the configuration; you then run the agent:

# Example: Run Claude CLI pointing at your town
cd agents/dev
claude --print  # Uses the model command from config

Or with Augment:

cd agents/tester
augment  # Uses the model command from config

Cleanup

When you're done:

# Stop the town's agents
tt stop

# Or just Ctrl+C if running `tt start`

Next Steps

Core Concepts Overview

Tinytown has exactly 5 core types. That's it. No more, no less.

┌─────────────────────────────────────────┐
│           Your Application              │
└───────────────┬─────────────────────────┘
                │
         ┌──────▼───────┐
         │    Town      │  ← Orchestrator
         └──────┬───────┘
                │
         ┌──────▼──────────────────┐
         │   Channel (Redis)       │  ← Message passing
         └──────┬──────────────────┘
                │
     ┌──────────┼──────────────┐
     ▼          ▼              ▼
┌────────┐ ┌────────┐    ┌────────┐
│ Agent  │ │ Agent  │ .. │ Agent  │  ← Workers
└───┬────┘ └───┬────┘    └───┬────┘
    │          │             │
    ▼          ▼             ▼
  Tasks      Tasks         Tasks      ← Work units

The 5 Core Types

| Type    | What It Is                               | Redis Key Pattern    |
|---------|------------------------------------------|----------------------|
| Town    | The orchestrator that manages everything | N/A (local)          |
| Agent   | A worker that executes tasks             | tt:<town>:agent:<id> |
| Task    | A unit of work with lifecycle            | tt:<town>:task:<id>  |
| Message | Communication between agents             | Transient            |
| Channel | Redis-based message transport            | tt:<town>:inbox:<id> |

How They Work Together

1. Town Orchestrates

The Town is your control center. It:

  • Starts and manages Redis
  • Spawns and tracks agents
  • Provides the API for coordination
tt init --name my-project
tt status

2. Agents Execute

Agents are workers. Each agent has:

  • A unique ID
  • A name (human-readable)
  • A model (claude, auggie, codex, etc.)
  • A state (starting, idle, working, stopped)
tt spawn worker-1 --model claude
tt list

3. Tasks Represent Work

Tasks are what agents work on. Each task has:

  • A description
  • A state (pending → assigned → running → completed/failed)
  • An assigned agent
tt assign worker-1 "Implement the login API"
tt tasks

4. Messages Coordinate

Messages are how agents communicate. They carry:

  • Sender and recipient
  • Message type (TaskAssign, TaskDone, StatusRequest, etc.)
  • Priority (Low, Normal, High, Urgent)
tt send worker-1 "Please update the README"
tt send worker-1 --urgent "Critical bug in production!"

5. Channel Transports

The Channel is the Redis connection that moves messages. It provides:

  • Priority queues (urgent messages go first)
  • Blocking receive (agents wait efficiently)
  • State persistence (survives restarts)

Mental Model

Think of it like a small company:

| Tinytown | Company Analogy        |
|----------|------------------------|
| Town     | The office building    |
| Agent    | An employee            |
| Task     | A work ticket          |
| Message  | An email/Slack message |
| Channel  | The email/Slack system |

What's NOT in Tinytown

Deliberately excluded to keep things simple:

  • ❌ Workflow DAGs — just assign tasks directly
  • ❌ Recovery daemons — Redis persistence handles crashes
  • ❌ Multi-tier databases — one Redis instance
  • ❌ Complex agent hierarchies — all agents are peers

If you need these, you can build them on top of Tinytown's primitives, or use a more complex system like Gastown.

Next Steps

Deep dive into each type:

Towns

A Town is your orchestration workspace. It's the top-level container that manages Redis, agents, and coordination.

What a Town Contains

my-project/           # Town root
├── tinytown.toml     # Configuration
├── .gitignore        # Auto-updated to exclude .tt/
└── .tt/              # Runtime artifacts (gitignored)
    ├── redis.sock    # Unix socket (when running)
    ├── redis.pid     # Redis process ID
    ├── redis.aof     # Redis persistence (if enabled)
    ├── agents/       # Agent working directories
    ├── logs/         # Activity logs
    └── tasks/        # Task storage

All runtime artifacts are stored under .tt/ which is automatically added to .gitignore during tt init. This keeps your repository clean and prevents accidental commits of logs, sockets, and other temporary files.

Creating a Town

CLI

tt init --name my-project

Rust API

use tinytown::{Town, Result};

#[tokio::main]
async fn main() -> Result<()> {
    // Initialize a new town
    let town = Town::init("./my-project", "my-project").await?;
    
    // Town is now running with Redis started
    Ok(())
}

Connecting to an Existing Town

#![allow(unused)]
fn main() {
// Connect to existing town (starts Redis if needed)
let town = Town::connect("./my-project").await?;
}

Town Configuration

The tinytown.toml file:

name = "my-project"
default_cli = "claude"
max_agents = 10

[redis]
use_socket = true
socket_path = ".tt/redis.sock"
host = "127.0.0.1"
port = 6379

[agent_clis.claude]
name = "claude"
command = "claude --print"

[agent_clis.auggie]
name = "auggie"
command = "augment"

Configuration Options

| Option            | Default        | Description                     |
|-------------------|----------------|---------------------------------|
| name              | Directory name | Human-readable town name        |
| redis.use_socket  | true           | Use Unix socket (faster) vs TCP |
| redis.socket_path | .tt/redis.sock | Socket file path (under .tt/)   |
| redis.host        | 127.0.0.1      | TCP host (if not using socket)  |
| redis.port        | 6379           | TCP port (if not using socket)  |
| default_cli       | claude         | Default CLI for new agents      |
| max_agents        | 10             | Maximum concurrent agents       |
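The documented defaults can be mirrored in a few Rust types. This is an illustrative sketch only: the field names follow tinytown.toml above, but the struct layout itself is an assumption, not Tinytown's actual config code.

```rust
// Illustrative mirror of the documented defaults; struct layout is assumed.
struct RedisConfig {
    use_socket: bool,
    socket_path: String,
    host: String,
    port: u16,
}

struct Config {
    default_cli: String,
    max_agents: u32,
    redis: RedisConfig,
}

impl Default for Config {
    fn default() -> Self {
        Config {
            default_cli: "claude".into(),
            max_agents: 10,
            redis: RedisConfig {
                use_socket: true,
                socket_path: ".tt/redis.sock".into(),
                host: "127.0.0.1".into(),
                port: 6379,
            },
        }
    }
}

fn main() {
    let cfg = Config::default();
    assert!(cfg.redis.use_socket);      // Unix socket by default
    assert_eq!(cfg.default_cli, "claude");
    assert_eq!(cfg.max_agents, 10);     // spawn guard
    assert_eq!(cfg.redis.port, 6379);   // only relevant when use_socket = false
}
```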

Town Lifecycle

┌─────────────┐
│   init()    │ ── Creates directories, config, starts Redis
└──────┬──────┘
       │
       ▼
┌─────────────┐
│   running   │ ── Agents can be spawned, tasks assigned
└──────┬──────┘
       │
       ▼
┌─────────────┐
│    drop     │ ── Redis stopped, cleanup
└─────────────┘

Town Methods

#![allow(unused)]
fn main() {
// Spawn a new agent
let agent = town.spawn_agent("worker-1", "claude").await?;

// Get handle to existing agent
let agent = town.agent("worker-1").await?;

// List all agents
let agents = town.list_agents().await;

// Access the channel directly
let channel = town.channel();

// Get configuration
let config = town.config();

// Get root directory
let root = town.root();
}

Redis Management

Tinytown automatically manages Redis:

  1. On init(): Starts redis-server with Unix socket
  2. On connect(): Connects to existing, or starts if needed
  3. On drop: Stops Redis gracefully

Unix Socket vs TCP

Unix sockets are ~10x faster for local communication:

| Mode        | Latency | Use Case                    |
|-------------|---------|-----------------------------|
| Unix Socket | ~0.1ms  | Local development (default) |
| TCP         | ~1ms    | Remote Redis, Docker        |

To use TCP instead:

[redis]
use_socket = false
host = "redis.example.com"
port = 6379

Agents

An Agent is a worker that executes tasks. Agents can be AI models (Claude, Auggie, Codex) or custom processes.

Agent Properties

| Property        | Type     | Description                           |
|-----------------|----------|---------------------------------------|
| id              | UUID     | Unique identifier                     |
| name            | String   | Human-readable name                   |
| agent_type      | Enum     | Worker or Supervisor                  |
| state           | Enum     | Current lifecycle state               |
| cli             | String   | CLI being used (claude, auggie, etc.) |
| current_task    | Option   | Task being worked on                  |
| created_at      | DateTime | When agent was created                |
| last_heartbeat  | DateTime | Last activity timestamp               |
| tasks_completed | u64      | Count of completed tasks              |

Agent States

┌───────────┐
│  Starting │ ── Agent is initializing
└─────┬─────┘
      │
      ▼
┌───────────┐     ┌───────────┐
│   Idle    │ ◄──►│  Working  │ ── Can accept work / Executing task
└─────┬─────┘     └───────────┘
      │
      ▼
┌───────────┐     ┌───────────┐
│  Paused   │     │   Error   │ ── Temporarily paused / Something went wrong
└─────┬─────┘     └───────────┘
      │
      ▼
┌───────────┐
│  Stopped  │ ── Agent has terminated
└───────────┘
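The lifecycle above can be read as a small state machine. A self-contained sketch follows; the event names ("ready", "task", and so on) are invented for illustration and are not Tinytown API:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum AgentState { Starting, Idle, Working, Paused, Error, Stopped }

// One possible transition table for the diagram above.
fn next(state: AgentState, event: &str) -> Option<AgentState> {
    use AgentState::*;
    match (state, event) {
        (Starting, "ready") => Some(Idle),
        (Idle, "task") => Some(Working),  // Idle ◄──► Working
        (Working, "done") => Some(Idle),
        (Working, "error") => Some(Error),
        (Idle, "pause") => Some(Paused),
        (Idle, "stop") | (Paused, "stop") => Some(Stopped),
        _ => None, // anything else is not a legal transition
    }
}

fn main() {
    let mut s = AgentState::Starting;
    for ev in ["ready", "task", "done", "stop"] {
        s = next(s, ev).expect("legal transition");
    }
    assert_eq!(s, AgentState::Stopped);
}
```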

Creating Agents

CLI

# With default model
tt spawn worker-1

# With specific model
tt spawn worker-1 --model claude
tt spawn worker-2 --model auggie
tt spawn reviewer --model codex

Rust API

#![allow(unused)]
fn main() {
use tinytown::{Town, Agent, AgentType};

let town = Town::connect(".").await?;

// Spawn returns a handle
let handle = town.spawn_agent("worker-1", "claude").await?;

// Handle provides operations
let id = handle.id();
let state = handle.state().await?;
let inbox_len = handle.inbox_len().await?;
}

Built-in Models

Tinytown comes with presets for popular AI coding agents:

| Model   | Command        | Agent            |
|---------|----------------|------------------|
| claude  | claude --print | Anthropic Claude |
| auggie  | augment        | Augment Code     |
| codex   | codex          | OpenAI Codex     |
| gemini  | gemini         | Google Gemini    |
| copilot | gh copilot     | GitHub Copilot   |
| aider   | aider          | Aider            |
| cursor  | cursor         | Cursor           |

Agent Types

Worker (Default)

Workers execute tasks assigned to them:

#![allow(unused)]
fn main() {
let agent = Agent::new("worker-1", "claude", AgentType::Worker);
}

Supervisor

A special agent that coordinates workers:

#![allow(unused)]
fn main() {
let supervisor = Agent::supervisor("coordinator");
}

The supervisor has a well-known ID and can send messages to all workers.

Working with Agent Handles

#![allow(unused)]
fn main() {
let handle = town.spawn_agent("worker-1", "claude").await?;

// Assign a task
let task_id = handle.assign(Task::new("Build the API")).await?;

// Send a message
handle.send(MessageType::StatusRequest).await?;

// Check inbox
let pending = handle.inbox_len().await?;

// Get current state
if let Some(agent) = handle.state().await? {
    println!("State: {:?}", agent.state);
    println!("Current task: {:?}", agent.current_task);
}

// Wait for completion
handle.wait().await?;
}

Agent Storage in Redis

Agents are persisted in Redis using town-isolated keys:

tt:<town_name>:agent:<uuid>  →  JSON serialized Agent struct
tt:<town_name>:inbox:<uuid>  →  List of pending messages

This town-isolated format allows multiple Tinytown projects to share the same Redis instance without key conflicts. See tt migrate for upgrading from older key formats.

This means:

  • Agent state survives Redis restarts (with persistence)
  • Multiple processes can coordinate via the same town
  • Multiple towns can share the same Redis instance
  • You can inspect state with redis-cli

Tasks

A Task is a unit of work that can be assigned to an agent.

Task Properties

| Property     | Type     | Description             |
|--------------|----------|-------------------------|
| id           | UUID     | Unique identifier       |
| description  | String   | What needs to be done   |
| state        | Enum     | Current lifecycle state |
| assigned_to  | Option   | Agent working on this   |
| created_at   | DateTime | When created            |
| updated_at   | DateTime | Last modification       |
| completed_at | Option   | When finished           |
| result       | Option   | Output or error message |
| parent_id    | Option   | For hierarchical tasks  |
| tags         | Vec      | Labels for filtering    |

Task States

┌─────────┐
│ Pending │ ── Created, waiting for assignment
└────┬────┘
     │ assign()
     ▼
┌──────────┐
│ Assigned │ ── Given to an agent
└────┬─────┘
     │ start()
     ▼
┌─────────┐
│ Running │ ── Agent is working on it
└────┬────┘
     │
     ├─── complete() ──► ┌───────────┐
     │                   │ Completed │ ✓
     │                   └───────────┘
     │
     ├─── fail() ──────► ┌────────┐
     │                   │ Failed │ ✗
     │                   └────────┘
     │
     └─── cancel() ────► ┌───────────┐
                         │ Cancelled │ ⊘
                         └───────────┘

Creating Tasks

CLI

# Assign a task directly to an agent
tt assign worker-1 "Implement user authentication"

# Check pending tasks
tt tasks

Define tasks in a file for version control and batch assignment:

[[tasks]]
id = "auth-api"
description = "Implement user authentication"
agent = "backend"
status = "pending"
tags = ["auth", "api"]

[[tasks]]
id = "auth-tests"
description = "Write tests for auth API"
agent = "tester"
status = "pending"
parent = "auth-api"

Then sync to Redis:

tt sync push

See tt plan for the full task DSL.

Task Lifecycle

Tasks move through states automatically as agents work on them:

  1. Pending → Created, waiting for assignment
  2. Assigned → Given to an agent via tt assign
  3. Running → Agent is actively working
  4. Completed/Failed/Cancelled → Terminal states
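These transitions can be sketched as a tiny state machine. A self-contained illustration (the transition table is taken from the state diagram above; this is not Tinytown's internal code):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum TaskState { Pending, Assigned, Running, Completed, Failed, Cancelled }

// Legal transitions from the task state diagram.
fn advance(state: TaskState, event: &str) -> Option<TaskState> {
    use TaskState::*;
    match (state, event) {
        (Pending, "assign") => Some(Assigned),
        (Assigned, "start") => Some(Running),
        (Running, "complete") => Some(Completed),
        (Running, "fail") => Some(Failed),
        (Running, "cancel") => Some(Cancelled),
        _ => None, // terminal states accept no further events
    }
}

fn main() {
    let mut s = TaskState::Pending;
    for ev in ["assign", "start", "complete"] {
        s = advance(s, ev).expect("legal transition");
    }
    assert_eq!(s, TaskState::Completed);
    // A finished task cannot be restarted.
    assert_eq!(advance(TaskState::Completed, "start"), None);
}
```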

Check task state with:

tt tasks

Or inspect directly in Redis:

redis-cli -s ./redis.sock GET "tt:<town_name>:task:<uuid>"

Task Storage in Redis

Tasks are stored as JSON using town-isolated keys:

tt:<town_name>:task:<uuid>  →  JSON serialized Task struct

This allows multiple towns to share the same Redis instance. You can inspect tasks directly:

redis-cli -s ./redis.sock GET "tt:<town_name>:task:550e8400-e29b-41d4-a716-446655440000"

See tt migrate for upgrading from older key formats.

Hierarchical Tasks

Create parent-child relationships in tasks.toml:

[[tasks]]
id = "user-system"
description = "User Management System"
agent = "architect"
status = "pending"

[[tasks]]
id = "signup"
description = "User signup flow"
parent = "user-system"
agent = "backend"
status = "pending"

[[tasks]]
id = "login"
description = "User login flow"
parent = "user-system"
agent = "backend"
status = "pending"

Task Tags

Use tags to categorize and filter:

[[tasks]]
id = "fix-xss"
description = "Fix XSS vulnerability"
status = "pending"
tags = ["security", "bug", "P0"]

Comparison with Gastown Beads

| Feature     | Tinytown Tasks     | Gastown Beads |
|-------------|--------------------|---------------|
| Storage     | Redis (JSON)       | Dolt SQL      |
| Hierarchy   | Optional parent_id | Full graph    |
| Metadata    | Tags array         | Full schema   |
| Persistence | Redis persistence  | Git-backed    |
| Complexity  | Simple             | Complex       |

Tinytown tasks are intentionally simpler. If you need Gastown's bead features (dependency graphs, git versioning, SQL queries), consider using Gastown or building on top of Tinytown.

Messages

Messages are how agents communicate. They're the envelopes that carry task assignments, status updates, and coordination signals.

Message Properties

| Property       | Type     | Description               |
|----------------|----------|---------------------------|
| id             | UUID     | Unique message identifier |
| from           | AgentId  | Sender                    |
| to             | AgentId  | Recipient                 |
| msg_type       | Enum     | Type and payload          |
| priority       | Enum     | Processing order          |
| created_at     | DateTime | When sent                 |
| correlation_id | Option   | For request/response      |

Message Types

#![allow(unused)]
fn main() {
pub enum MessageType {
    // Semantic message types (for inter-agent communication)
    Task { description: String },           // Actionable work request
    Query { question: String },             // Question expecting response
    Informational { summary: String },      // FYI/context update
    Confirmation { ack_type: ConfirmationType }, // Receipt/acknowledgment

    // Task management
    TaskAssign { task_id: String },
    TaskDone { task_id: String, result: String },
    TaskFailed { task_id: String, error: String },

    // Status
    StatusRequest,
    StatusResponse { state: String, current_task: Option<String> },

    // Lifecycle
    Ping,
    Pong,
    Shutdown,

    // Extensibility
    Custom { kind: String, payload: String },
}

pub enum ConfirmationType {
    Received,              // Message was received
    Acknowledged,          // Message was acknowledged
    Thanks,                // Message expressing thanks
    Approved,              // Approval confirmation
    Rejected { reason: String }, // Rejection with reason
}
}

Semantic Message Classification

Messages are classified as either actionable or informational:

| Actionable (require work) | Informational (context only) |
|---------------------------|------------------------------|
| Task, Query, TaskAssign   | Informational, Confirmation  |
| StatusRequest, Ping       | TaskDone, TaskFailed         |
| Shutdown, Custom          | StatusResponse, Pong         |

Use msg.is_actionable() and msg.is_informational_or_confirmation() helpers to classify.
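The table can be sketched as a standalone function. Unit variants stand in for the real MessageType (payload fields omitted), and the real helpers are methods on Message, so treat this as an illustration only:

```rust
// Unit-variant stand-ins for MessageType; payload fields omitted.
enum MessageType {
    Task, Query, TaskAssign, StatusRequest, Ping, Shutdown, Custom,
    Informational, Confirmation, TaskDone, TaskFailed, StatusResponse, Pong,
}

// Mirrors the classification table: these variants require work.
fn is_actionable(m: &MessageType) -> bool {
    use MessageType::*;
    matches!(m, Task | Query | TaskAssign | StatusRequest | Ping | Shutdown | Custom)
}

fn main() {
    assert!(is_actionable(&MessageType::TaskAssign)); // requires work
    assert!(!is_actionable(&MessageType::Pong));      // context only
}
```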


Priority Levels

Messages are processed by priority:

| Priority | Behavior |
|----------|----------|
| `Urgent` | Goes to front of queue, interrupt current work |
| `High` | Goes to front of queue |
| `Normal` | Goes to back of queue (default) |
| `Low` | Goes to back, processed when idle |

let msg = Message::new(from, to, MessageType::Shutdown)
    .with_priority(Priority::Urgent);

Creating Messages

#![allow(unused)]
fn main() {
use tinytown::{Message, MessageType, AgentId, Priority};

// Basic message
let msg = Message::new(
    AgentId::supervisor(),  // from
    worker_id,              // to
    MessageType::TaskAssign { task_id: "abc123".into() }
);

// With priority
let urgent = Message::new(from, to, MessageType::Shutdown)
    .with_priority(Priority::Urgent);

// With correlation (for request/response)
let request = Message::new(from, to, MessageType::StatusRequest);
let response = Message::new(to, from, MessageType::StatusResponse { 
    state: "working".into(),
    current_task: Some("task-123".into())
}).with_correlation(request.id);
}

Sending Messages

Via Channel:

#![allow(unused)]
fn main() {
let channel = town.channel();
channel.send(&message).await?;
}

Via AgentHandle:

#![allow(unused)]
fn main() {
let handle = town.agent("worker-1").await?;
handle.send(MessageType::StatusRequest).await?;
}

Receiving Messages

#![allow(unused)]
fn main() {
use std::time::Duration;

// Blocking receive (waits up to timeout)
if let Some(msg) = channel.receive(agent_id, Duration::from_secs(30)).await? {
    match msg.msg_type {
        MessageType::TaskAssign { task_id } => {
            println!("Got task: {}", task_id);
        }
        MessageType::Shutdown => {
            println!("Shutting down");
            break;
        }
        _ => {}
    }
}

// Non-blocking receive
if let Some(msg) = channel.try_receive(agent_id).await? {
    // Process message
}
}

Broadcasting

Send to all agents:

#![allow(unused)]
fn main() {
let broadcast = Message::new(
    AgentId::supervisor(),
    AgentId::supervisor(),  // Placeholder, broadcast ignores this
    MessageType::Shutdown
);
channel.broadcast(&broadcast).await?;
}

Custom Messages

For application-specific communication:

#![allow(unused)]
fn main() {
// Define your payload as JSON
let payload = serde_json::json!({
    "pr_url": "https://github.com/...",
    "files_changed": ["src/auth.rs", "tests/auth_test.rs"]
});

let msg = Message::new(from, to, MessageType::Custom {
    kind: "pr_ready".into(),
    payload: payload.to_string()
});
}

Message Flow Example

Supervisor                    Worker
    │                           │
    │  TaskAssign{task-123}     │
    │ ─────────────────────────►│
    │                           │
    │                           │ (working...)
    │                           │
    │    TaskDone{task-123}     │
    │◄───────────────────────── │
    │                           │

Comparison with Gastown Mail

| Feature     | Tinytown Messages | Gastown Mail       |
|-------------|-------------------|--------------------|
| Transport   | Redis lists       | Beads (git-backed) |
| Persistence | Redis persistence | Git commits        |
| Priority    | 4 levels          | Yes                |
| Routing     | Direct to inbox   | Complex routing    |
| Recovery    | Redis replays     | Event replay       |

Tinytown messages are ephemeral by default (in Redis memory). Enable Redis persistence for durability.

Channels

The Channel is Tinytown's message transport layer. It's a thin wrapper around Redis that provides queues, pub/sub, and state storage.

Why Redis?

Redis is perfect for agent orchestration:

| Feature      | Benefit                    |
|--------------|----------------------------|
| Unix sockets | Sub-millisecond latency    |
| Lists        | Perfect for message queues |
| BLPOP        | Efficient blocking receive |
| Pub/Sub      | Broadcast to all agents    |
| Persistence  | Survives crashes           |
| Simple       | No complex setup           |

Channel Operations

Send a Message

#![allow(unused)]
fn main() {
channel.send(&message).await?;
}

Messages go to the recipient's inbox (tt:<town>:inbox:<agent-id>).

Priority handling:

  • Urgent / High → LPUSH (front of queue)
  • Normal / Low → RPUSH (back of queue)
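The effect of this rule can be sketched with a VecDeque standing in for the Redis list (a std-only illustration, not Tinytown's implementation):

```rust
use std::collections::VecDeque;

enum Priority { Low, Normal, High, Urgent }

// Urgent/High jump the queue (LPUSH); Normal/Low join the back (RPUSH).
fn enqueue(inbox: &mut VecDeque<&'static str>, msg: &'static str, prio: Priority) {
    match prio {
        Priority::Urgent | Priority::High => inbox.push_front(msg),
        Priority::Normal | Priority::Low => inbox.push_back(msg),
    }
}

fn main() {
    let mut inbox = VecDeque::new();
    enqueue(&mut inbox, "task-1", Priority::Normal);
    enqueue(&mut inbox, "task-2", Priority::Normal);
    enqueue(&mut inbox, "shutdown", Priority::Urgent);
    // The receiver pops from the front, so the urgent message wins.
    assert_eq!(inbox.pop_front(), Some("shutdown"));
    assert_eq!(inbox.pop_front(), Some("task-1"));
}
```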

Receive a Message

#![allow(unused)]
fn main() {
// Blocking (waits up to timeout)
let msg = channel.receive(agent_id, Duration::from_secs(30)).await?;

// Non-blocking
let msg = channel.try_receive(agent_id).await?;
}

Uses BLPOP for efficient waiting without polling.

Check Inbox Length

#![allow(unused)]
fn main() {
let pending = channel.inbox_len(agent_id).await?;
println!("{} messages waiting", pending);
}

Broadcast

#![allow(unused)]
fn main() {
channel.broadcast(&message).await?;
}

Uses Redis Pub/Sub (PUBLISH tt:broadcast).

State Storage

The channel also stores agent and task state:

Agent State

#![allow(unused)]
fn main() {
// Store
channel.set_agent_state(&agent).await?;

// Retrieve
let agent = channel.get_agent_state(agent_id).await?;
}

Stored at: tt:<town>:agent:<uuid>

Task State

#![allow(unused)]
fn main() {
// Store
channel.set_task(&task).await?;

// Retrieve
let task = channel.get_task(task_id).await?;
}

Stored at: tt:<town>:task:<uuid>

Redis Key Patterns

Keys are town-isolated to allow multiple towns to share the same Redis instance:

| Pattern                | Type    | Purpose             |
|------------------------|---------|---------------------|
| tt:<town>:inbox:<uuid> | List    | Agent message queue |
| tt:<town>:agent:<uuid> | String  | Agent state (JSON)  |
| tt:<town>:task:<uuid>  | String  | Task state (JSON)   |
| tt:broadcast           | Pub/Sub | Broadcast channel   |

See tt migrate for upgrading from older key formats.
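Key construction following these patterns can be sketched in a few lines. The helper names here are hypothetical, not Tinytown API; Tinytown builds the keys internally:

```rust
// Hypothetical helpers mirroring the documented key patterns.
fn inbox_key(town: &str, id: &str) -> String { format!("tt:{town}:inbox:{id}") }
fn agent_key(town: &str, id: &str) -> String { format!("tt:{town}:agent:{id}") }
fn task_key(town: &str, id: &str) -> String { format!("tt:{town}:task:{id}") }

fn main() {
    assert_eq!(agent_key("my-project", "550e8400"), "tt:my-project:agent:550e8400");
    // The town segment keeps two projects on one Redis from colliding.
    assert_ne!(inbox_key("town-a", "x"), inbox_key("town-b", "x"));
    assert_eq!(task_key("t", "1"), "tt:t:task:1");
}
```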

Direct Redis Access

Sometimes you want to query Redis directly:

# Connect to town's Redis
redis-cli -s ./redis.sock

# List all agent inboxes for your town
KEYS tt:<town_name>:inbox:*

# Check inbox length
LLEN tt:<town_name>:inbox:550e8400-e29b-41d4-a716-446655440000

# View agent state
GET tt:<town_name>:agent:550e8400-e29b-41d4-a716-446655440000

# Monitor all messages
MONITOR

Performance

Unix socket performance is excellent:

| Operation        | Latency |
|------------------|---------|
| Send message     | ~0.1ms  |
| Receive (cached) | ~0.1ms  |
| State get/set    | ~0.1ms  |
| TCP equivalent   | ~1-2ms  |

For local development, this means near-instant coordination.

Persistence

By default, Redis runs in-memory only. For durability:

Option 1: RDB Snapshots

redis-cli -s ./redis.sock CONFIG SET save "60 1"

Saves every 60 seconds if at least 1 key changed.

Option 2: AOF (Append Only File)

redis-cli -s ./redis.sock CONFIG SET appendonly yes

Logs every write for full durability.

Creating a Channel

Usually you don't create channels directly; the Town does it:

// Get channel from town
let channel = town.channel();

// Or create manually (advanced)
use redis::aio::ConnectionManager;
let client = redis::Client::open("unix:///path/to/redis.sock")?;
let conn = ConnectionManager::new(client).await?;
let channel = Channel::new(conn);

Agent Coordination

How agents work together and decide when tasks are complete.

The Simple Model

Tinytown keeps coordination simple:

  1. Conductor orchestrates (spawns agents, assigns tasks)
  2. Workers do the work
  3. Reviewer decides when work is done
  4. Conductor monitors and coordinates handoffs

The Reviewer Pattern

Always include a reviewer agent. They’re your quality gate:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     work      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Conductor β”‚ ────────────► β”‚   Worker   β”‚
β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜               β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
      β”‚                            β”‚
      β”‚                            β”‚ completes
      β”‚                            β–Ό
      β”‚    review request    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚ ────────────────────►│  Reviewer  β”‚
      β”‚                      β””β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
      β”‚                            β”‚
      β”‚β—„β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β”‚      approve / reject
      β”‚
      β–Ό
   Done (or assign fixes)

Why a Reviewer?

Without a reviewer, who decides β€œdone”?

| Approach | Problem |
|---|---|
| Worker decides | "I'm done," but is it good? |
| Conductor decides | Conductor may not understand the domain |
| User decides | User has to check everything |
| Reviewer decides | βœ“ Separation of concerns |

The reviewer pattern is used everywhere: code review, QA, editing. It works.

How It Works in Practice

1. Conductor Spawns Team

tt spawn backend
tt spawn frontend
tt spawn reviewer  # Always include!

2. Workers Work

tt assign backend "Build the API"
tt assign frontend "Build the UI"

3. Conductor Requests Review

When tt status shows workers are idle:

tt assign reviewer "Review the API implementation. Check security, error handling, tests. Approve or list what needs fixing."

4. Reviewer Responds

The reviewer either:

  • Approves: β€œLGTM, API is solid”
  • Requests changes: β€œPassword hashing uses weak algorithm, fix needed”

5. Conductor Acts

  • If approved β†’ task is done
  • If changes needed β†’ tt assign backend "Fix: use bcrypt instead of md5"
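The approve-or-reject branch in step 5 can be sketched as a small decision function. The `Verdict` type below is illustrative, not part of Tinytown:

```rust
// Sketch of the conductor's review gate. `Verdict` is a hypothetical type
// standing in for whatever the reviewer reports back.
enum Verdict {
    Approved,
    ChangesRequested(String),
}

// Decide the conductor's next move from the reviewer's verdict.
fn next_action(verdict: Verdict) -> String {
    match verdict {
        Verdict::Approved => "mark task done".to_string(),
        Verdict::ChangesRequested(fix) => format!("assign worker: Fix: {}", fix),
    }
}

fn main() {
    assert_eq!(next_action(Verdict::Approved), "mark task done");
    let rejected = Verdict::ChangesRequested("use bcrypt instead of md5".to_string());
    println!("{}", next_action(rejected));
}
```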

Messages Between Agents

Agents can send messages directly via their inboxes:

// In code (for custom integrations)
let msg = Message::new(worker_id, reviewer_id, MessageType::Custom {
    kind: "ready_for_review".into(),
    payload: r#"{"files": ["src/api.rs"]}"#.into(),
});
channel.send(&msg).await?;

But for simplicity, the conductor handles coordination. Agents don’t need to message each other directlyβ€”the conductor assigns review tasks when workers are done.

Keeping It Simple

Tinytown deliberately avoids:

  • ❌ Complex state machines
  • ❌ Automatic dependency resolution
  • ❌ Event-driven triggers

Instead:

  • βœ… Conductor checks tt status
  • βœ… Conductor assigns next task
  • βœ… Reviewer is the quality gate

This is explicit and easy to understand. You always know what’s happening.

Comparison with Gastown

| Aspect | Gastown | Tinytown |
|---|---|---|
| Coordination | Mayor + Witness + Hooks | Conductor + Reviewer |
| Completion | Complex bead states | Reviewer approves |
| Automation | Event-driven | Conductor-driven |
| Complexity | High | Low |

Gastown automates more but is harder to understand. Tinytown is explicit.

Mission Mode

Mission Mode enables autonomous, dependency-aware orchestration of multiple GitHub issues with automatic PR/CI monitoring.

Overview

While regular Tinytown tasks are great for individual work items, mission mode is designed for larger objectives that span multiple issues, require dependency tracking, and need ongoing monitoring of external events like CI status and code reviews.

Think of a mission as a β€œproject manager” that:

  • Accepts multiple GitHub issues as objectives
  • Builds a dependency-aware execution plan (DAG)
  • Delegates work to best-fit agents automatically
  • Monitors PRs, CI, and review status
  • Persists state for restart/resume capability

Core Concepts

MissionRun

The top-level orchestration record. A MissionRun owns:

  • Objectives: GitHub issues or documents to complete
  • Work Items: Individual tasks extracted from objectives
  • Watch Items: Monitoring tasks for PRs/CI
  • Policy: Execution rules (parallelism, review gates, etc.)

Mission States

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Planning β”‚ ── Compiling work graph from objectives
β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜
     β”‚
     β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Running  β”‚ ── Active execution
β””β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”˜
     β”‚
     β”œβ”€β”€β–Ί β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
     β”‚    β”‚ Blocked  β”‚ ── Waiting on external event
     β”‚    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
     β”‚
     β”œβ”€β”€β–Ί β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
     β”‚    β”‚ Completed β”‚ βœ“ All objectives done
     β”‚    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
     β”‚
     └──► β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Failed β”‚ βœ— Unrecoverable error
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜
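The diagram above maps to a small state machine. A sketch with illustrative types, assuming Blocked transitions back to Running once the external event resolves:

```rust
// Hypothetical mission state machine mirroring the diagram above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum MissionState {
    Planning,
    Running,
    Blocked,
    Completed,
    Failed,
}

// Legal transitions. Blocked -> Running is an assumption (resume once the
// external event resolves); Completed and Failed are terminal.
fn can_transition(from: MissionState, to: MissionState) -> bool {
    use MissionState::*;
    matches!(
        (from, to),
        (Planning, Running)
            | (Running, Blocked)
            | (Blocked, Running)
            | (Running, Completed)
            | (Running, Failed)
    )
}

fn main() {
    assert!(can_transition(MissionState::Planning, MissionState::Running));
    assert!(!can_transition(MissionState::Completed, MissionState::Running));
}
```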

Work Items

Individual units of work in the mission DAG. Each work item:

  • Has a status: pending β†’ ready β†’ assigned β†’ running β†’ done
  • May depend on other work items
  • Gets assigned to an agent based on role fit
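The status progression can be sketched as a simple enum (the names are illustrative):

```rust
// Hypothetical work-item status pipeline: pending -> ready -> assigned -> running -> done.
#[derive(Debug, Clone, Copy, PartialEq)]
enum WorkStatus {
    Pending,
    Ready,
    Assigned,
    Running,
    Done,
}

// Advance one step; Done is terminal.
fn advance(status: WorkStatus) -> WorkStatus {
    use WorkStatus::*;
    match status {
        Pending => Ready,
        Ready => Assigned,
        Assigned => Running,
        Running => Done,
        Done => Done,
    }
}

fn main() {
    // Walking the full pipeline always terminates at Done.
    let mut s = WorkStatus::Pending;
    while s != WorkStatus::Done {
        s = advance(s);
    }
    assert_eq!(s, WorkStatus::Done);
}
```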

Watch Items

Scheduled monitoring tasks that poll for external events:

  • PR Checks: CI pass/fail status
  • Reviews: Human review comments
  • Bugbot: Automated security reports
  • Mergeability: Conflict and merge status

Dependency Detection

The mission compiler parses issue bodies for dependency markers:

<!-- In your GitHub issue body -->
This feature depends on #42.
After #41 is complete, we can start this.
Blocked by #40.

Supported patterns:

  • depends on #N
  • after #N
  • blocked by #N
  • requires #N
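A simplified sketch of how such markers could be scanned out of an issue body (the real mission compiler's parsing may differ, e.g. in case handling or word boundaries):

```rust
// Naive scan for dependency markers like "depends on #42".
// Simplified sketch; not the actual mission compiler.
fn parse_deps(body: &str) -> Vec<u64> {
    let markers = ["depends on #", "after #", "blocked by #", "requires #"];
    let lower = body.to_lowercase();
    let mut deps = Vec::new();
    for marker in &markers {
        let mut rest = lower.as_str();
        // A marker can appear more than once, so keep scanning the remainder.
        while let Some(i) = rest.find(marker) {
            let tail = &rest[i + marker.len()..];
            let digits: String = tail.chars().take_while(|c| c.is_ascii_digit()).collect();
            if let Ok(n) = digits.parse() {
                deps.push(n);
            }
            rest = tail;
        }
    }
    deps.sort_unstable();
    deps.dedup();
    deps
}

fn main() {
    let body = "This feature depends on #42.\nBlocked by #40.";
    assert_eq!(parse_deps(body), vec![40, 42]);
}
```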

Mission Policy

Control execution behavior with policy settings:

| Setting | Default | Description |
|---|---|---|
| max_parallel_items | 2 | Max concurrent work items |
| reviewer_required | true | Require review before merge |
| auto_merge | false | Merge PRs automatically on approval |
| watch_interval_secs | 180 | How often to poll PR/CI status |
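The documented defaults could be modeled as a policy struct roughly like this. The field names follow the settings above; the struct itself is a sketch, not Tinytown's real type:

```rust
// Sketch of a mission policy mirroring the documented defaults.
struct MissionPolicy {
    max_parallel_items: usize,
    reviewer_required: bool,
    auto_merge: bool,
    watch_interval_secs: u64,
}

impl Default for MissionPolicy {
    fn default() -> Self {
        Self {
            max_parallel_items: 2,    // max concurrent work items
            reviewer_required: true,  // require review before merge
            auto_merge: false,        // never merge without explicit opt-in
            watch_interval_secs: 180, // PR/CI poll interval
        }
    }
}

fn main() {
    let policy = MissionPolicy::default();
    assert_eq!(policy.max_parallel_items, 2);
    assert!(policy.reviewer_required && !policy.auto_merge);
    println!("poll every {}s", policy.watch_interval_secs);
}
```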

Example Workflow

# Start a mission from multiple issues
tt mission start --issue 23 --issue 24 --issue 25

# Check mission status
tt mission status

# View detailed work items
tt mission status --work

# Stop a mission gracefully
tt mission stop <run-id>

# Resume a stopped mission
tt mission resume <run-id>

Scheduler Loop

The scheduler runs every 30 seconds (configurable) and:

  1. Loads active missions from Redis
  2. Checks due watch items, executes triggers
  3. Promotes pending work items to ready when dependencies satisfied
  4. Matches ready items to idle agents by role fit
  5. Enforces reviewer gates before advancing
  6. Marks mission completed when no items remain
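Step 3 (dependency promotion) can be sketched with plain data structures, assuming a done-set of completed item ids:

```rust
use std::collections::HashSet;

// Minimal stand-in for a work item in the mission DAG.
struct Item {
    id: u64,
    deps: Vec<u64>,
    ready: bool,
}

// Promote pending items whose dependencies are all in the done set.
// Returns how many items were promoted this round.
fn promote(items: &mut [Item], done: &HashSet<u64>) -> usize {
    let mut promoted = 0;
    for item in items.iter_mut() {
        if !item.ready && item.deps.iter().all(|d| done.contains(d)) {
            item.ready = true;
            promoted += 1;
        }
    }
    promoted
}

fn main() {
    let done: HashSet<u64> = [1].into_iter().collect();
    let mut items = vec![
        Item { id: 2, deps: vec![1], ready: false },
        Item { id: 3, deps: vec![2], ready: false },
    ];
    // Only item 2 is unblocked; item 3 still waits on 2.
    assert_eq!(promote(&mut items, &done), 1);
    assert!(items[0].ready && !items[1].ready);
    println!("promoted item {}", items[0].id);
}
```

Each 30-second round re-runs this pass, so items ripple to ready as their upstream dependencies finish.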

Agent Routing

Work items are matched to agents using role-fit scoring:

  1. Exact match: owner_role: "backend" β†’ agent with backend role
  2. Generic fallback: Any idle worker
  3. Load balancing: Avoid assigning too many items to one agent
  4. Reviewer reservation: Keep reviewer available for gates
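Role-fit scoring could look like the sketch below: an exact role match dominates the generic fallback, busy agents are skipped, and lighter load breaks ties. The weights are illustrative; the real scheduler's heuristics may differ:

```rust
// Sketch of role-fit matching with illustrative scoring weights.
struct Agent<'a> {
    name: &'a str,
    role: &'a str,
    idle: bool,
    load: u32, // items already assigned, for load balancing
}

// Pick the idle agent with the best role fit, preferring lighter load.
fn best_fit<'a>(agents: &'a [Agent<'a>], wanted_role: &str) -> Option<&'a str> {
    agents
        .iter()
        .filter(|a| a.idle)
        .max_by_key(|a| {
            let role_score: i64 = if a.role == wanted_role { 100 } else { 1 };
            role_score - i64::from(a.load)
        })
        .map(|a| a.name)
}

fn main() {
    let agents = [
        Agent { name: "generic", role: "worker", idle: true, load: 0 },
        Agent { name: "api-dev", role: "backend", idle: true, load: 2 },
        Agent { name: "busy-dev", role: "backend", idle: false, load: 0 },
    ];
    // Exact role match beats the generic fallback; busy agents are skipped.
    assert_eq!(best_fit(&agents, "backend"), Some("api-dev"));
    assert_eq!(best_fit(&agents, "frontend"), Some("generic"));
}
```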

Redis Storage

Missions persist in Redis with these keys:

tt:{town}:mission:{run_id}          # MissionRun metadata
tt:{town}:mission:{run_id}:work     # WorkItem collection
tt:{town}:mission:{run_id}:watch    # WatchItem collection
tt:{town}:mission:{run_id}:events   # Activity log (last 100)
tt:{town}:mission:active            # Set of active MissionIds

Tutorial: Single Agent Workflow

Let’s build a complete workflow with one agent doing a coding task.

What We’ll Build

A simple system that:

  1. Creates a coding task
  2. Assigns it to an agent
  3. Waits for completion
  4. Reports the result

Setup

mkdir single-agent-demo && cd single-agent-demo
tt init --name demo

The Code

Create main.rs:

use tinytown::{Town, Task, Result};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<()> {
    // Connect to town
    let town = Town::connect(".").await?;
    
    // Create an agent
    let agent = town.spawn_agent("coder", "claude").await?;
    println!("πŸ€– Spawned agent: {}", agent.id());
    
    // Create a task
    let task = Task::new(
        "Create a Rust function that calculates fibonacci numbers recursively"
    );
    println!("πŸ“‹ Created task: {}", task.id);
    
    // Assign to agent
    let task_id = agent.assign(task).await?;
    println!("βœ… Assigned task {} to coder", task_id);
    
    // Check status periodically
    loop {
        tokio::time::sleep(Duration::from_secs(5)).await;
        
        if let Some(state) = agent.state().await? {
            println!("   Agent state: {:?}", state.state);
            
            match state.state {
                tinytown::AgentState::Idle => {
                    println!("πŸŽ‰ Agent completed work!");
                    break;
                }
                tinytown::AgentState::Error => {
                    println!("❌ Agent encountered error");
                    break;
                }
                _ => continue,
            }
        }
    }
    
    // Get task result
    if let Some(task) = town.channel().get_task(task_id).await? {
        println!("\nπŸ“Š Task result:");
        println!("   State: {:?}", task.state);
        if let Some(result) = task.result {
            println!("   Output: {}", result);
        }
    }
    
    Ok(())
}

Running It

# In terminal 1: Keep the town running
tt start

# In terminal 2: Run your code
cargo run

What Happens

  1. Town connects to the existing town (and its Redis)
  2. Agent spawns with state Starting β†’ Idle
  3. Task is created with state Pending
  4. Assignment sends a TaskAssign message to agent’s inbox
  5. Agent receives the message (in a real setup, Claude would process it)
  6. Polling checks agent state every 5 seconds
  7. Completion when agent returns to Idle

The Message Flow

Your Code                     Redis                              Agent
    β”‚                           β”‚                                  β”‚
    β”‚  spawn_agent()            β”‚                                  β”‚
    β”‚ ─────────────────────────►│                                  β”‚
    β”‚                           β”‚  SET tt:<town>:agent:xxx         β”‚
    β”‚                           β”‚ ─────────────────────────────────│
    β”‚                           β”‚                                  β”‚
    β”‚  assign(task)             β”‚                                  β”‚
    β”‚ ─────────────────────────►│                                  β”‚
    β”‚                           β”‚  SET tt:<town>:task:yyy          β”‚
    β”‚                           β”‚  RPUSH tt:<town>:inbox:xxx       β”‚
    β”‚                           β”‚ ─────────────────────────────────│
    β”‚                           β”‚                                  β”‚
    β”‚                           β”‚  BLPOP tt:<town>:inbox:xxx       β”‚
    β”‚                           │◄─────────────────────────────────│
    β”‚                           β”‚                                  β”‚
    β”‚  state()                  β”‚                                  β”‚
    β”‚ ─────────────────────────►│                                  β”‚
    β”‚                           β”‚  GET tt:<town>:agent:xxx         β”‚
    │◄───────────────────────── β”‚                                  β”‚

Simulating the Agent

In a real workflow, Claude (or another AI) receives the task. For testing, you can simulate completion:

# In redis-cli (replace <town> with your town name)
redis-cli -s ./redis.sock

# Get the inbox message
LPOP tt:<town>:inbox:550e8400-e29b-41d4-a716-446655440000

# Update agent state to idle
# (In practice, the agent process does this)

Key Takeaways

  1. Towns manage Redis β€” You don’t need to start it manually
  2. Agents are stateful β€” Their state persists in Redis
  3. Tasks are tracked β€” Full lifecycle from pending to complete
  4. Messages are reliable β€” Redis lists ensure delivery

Tutorial: Multi-Agent Coordination

Let’s coordinate multiple agents working together on a feature.

The Scenario

We’ll build a system where:

  1. Architect designs the API
  2. Developer implements it
  3. Tester writes tests
  4. Reviewer reviews everything

Setup

mkdir multi-agent-demo && cd multi-agent-demo
tt init --name multi-demo

Spawning the Team

tt spawn architect --model claude
tt spawn developer --model auggie
tt spawn tester --model codex
tt spawn reviewer --model claude

Check your team:

tt list

Sequential Pipeline with tasks.toml

Define your workflow in tasks.toml:

[meta]
description = "Auth API Pipeline"

[[tasks]]
id = "design"
description = "Design a REST API for user authentication with JWT tokens"
agent = "architect"
status = "pending"

[[tasks]]
id = "implement"
description = "Implement the auth API from the architect's design"
agent = "developer"
status = "pending"
parent = "design"

[[tasks]]
id = "test"
description = "Write comprehensive tests for the auth API"
agent = "tester"
status = "pending"
parent = "implement"

Then sync to Redis and let the conductor orchestrate:

tt sync push
tt conductor

Parallel Execution

When tasks are independent, assign them all at once:

# Spawn agents
tt spawn frontend --model claude
tt spawn backend --model auggie
tt spawn docs --model codex

# Assign tasks (they run in parallel)
tt assign frontend "Build the login UI"
tt assign backend "Build the auth API"
tt assign docs "Write API documentation"

# Monitor progress
tt status

Fan-Out / Fan-In

Use the conductor for complex workflows:

# Spawn workers
tt spawn worker-1 --model claude
tt spawn worker-2 --model claude
tt spawn worker-3 --model claude
tt spawn reviewer --model claude

# Assign work to all workers
tt assign worker-1 "Implement module A"
tt assign worker-2 "Implement module B"
tt assign worker-3 "Implement module C"

# Monitor until all complete
tt status

# Then aggregate with reviewer
tt assign reviewer "Review modules A, B, and C for consistency"

Agent-to-Agent Communication

Agents can send messages to each other:

# Send a message to another agent
tt send reviewer "Auth API implementation complete. Ready for review."

# Send an urgent message
tt send reviewer --urgent "Critical bug found in module A!"

Comparison with Gastown

| Pattern | Tinytown | Gastown |
|---|---|---|
| Sequential | wait_for_idle() loop | Convoy + Beads events |
| Parallel | tokio::join! | Mayor distributes |
| Fan-out/in | Manual coordination | Convoy tracking |
| Messaging | Direct channel.send() | Mail protocol |

Tinytown is more explicitβ€”you write the coordination logic. Gastown abstracts it with Convoys and the Mayor. Choose based on your needs.

Tutorial: Task Pipelines

Build structured workflows with task dependencies and hierarchies.

What We’ll Build

A code review pipeline:

  1. Developer writes code
  2. Linter checks style
  3. Tester writes tests
  4. Reviewer approves
  5. Merger deploys

Pipeline with tasks.toml

Define your pipeline in tasks.toml:

[meta]
description = "User Profile Feature Pipeline"

# Epic (parent task)
[[tasks]]
id = "profile-epic"
description = "Implement user profile feature"
status = "pending"
tags = ["epic", "q1-2024"]

# Subtasks under the epic
[[tasks]]
id = "design"
description = "Design profile API schema"
agent = "architect"
status = "pending"
parent = "profile-epic"
tags = ["design"]

[[tasks]]
id = "implement"
description = "Implement profile endpoints"
agent = "developer"
status = "pending"
parent = "profile-epic"
tags = ["backend"]

[[tasks]]
id = "test"
description = "Write profile API tests"
agent = "tester"
status = "pending"
parent = "profile-epic"
tags = ["testing"]

[[tasks]]
id = "review"
description = "Review profile implementation"
agent = "reviewer"
status = "pending"
parent = "profile-epic"
tags = ["review"]

Run the pipeline:

# Initialize the plan
tt plan --init

# Spawn the team
tt spawn architect --model claude
tt spawn developer --model auggie
tt spawn tester --model codex
tt spawn reviewer --model claude

# Push tasks to Redis
tt sync push

# Start the conductor to orchestrate
tt conductor

Sequential Pipeline via CLI

For simple sequential workflows:

# Stage 1: Design
tt assign architect "Design the feature architecture"
# Wait for completion, then...

# Stage 2: Implement
tt assign developer "Implement the feature"
# Wait for completion, then...

# Stage 3: Test
tt assign tester "Write tests for the feature"
# Wait for completion, then...

# Stage 4: Review
tt assign reviewer "Review the implementation"

Use tt status to monitor progress between stages.

Multi-Stage Pipeline Example

A complete tasks.toml for a CI/CD-like pipeline:

[meta]
description = "Code Review Pipeline"
default_agent = "developer"

[[tasks]]
id = "lint"
description = "Run linting on src/"
agent = "linter"
status = "pending"

[[tasks]]
id = "build"
description = "Build the project"
agent = "builder"
status = "pending"
parent = "lint"

[[tasks]]
id = "test"
description = "Run test suite"
agent = "tester"
status = "pending"
parent = "build"

[[tasks]]
id = "review"
description = "Code review"
agent = "reviewer"
status = "pending"
parent = "test"

[[tasks]]
id = "deploy"
description = "Deploy to staging"
agent = "deployer"
status = "pending"
parent = "review"

Best Practices

  1. Use parent tasks for grouping related work
  2. Tag tasks for easy filtering and reporting
  3. Keep stages small β€” easier to retry and debug
  4. Log stage transitions β€” helps troubleshooting
  5. Handle failures gracefully β€” don’t crash the whole pipeline

Tutorial: Error Handling & Recovery

Things go wrong. Agents crash, tasks fail, Redis restarts. Here’s how to handle it.

Checking Agent Health

Use CLI commands to monitor agent state:

# Check all agents
tt status

# List agents and their states
tt list

# Check pending tasks
tt tasks

Agent states to watch for:

  • Idle β€” Ready for work βœ“
  • Working β€” Busy but healthy βœ“
  • Error β€” Something went wrong βœ—
  • Stopped β€” Agent terminated βœ—

Checking Task State

View task status with the CLI:

# See all pending tasks by agent
tt tasks

# Check a specific agent's inbox
tt inbox <agent-name>

Respawning Failed Agents

If an agent dies, spawn a new one:

# Check if agent exists
tt list

# If stopped or missing, respawn it
tt spawn worker-1 --model claude

# Or prune stale agents first
tt prune
tt spawn worker-1 --model claude

Graceful Shutdown

To stop agents gracefully:

# Stop a specific agent
tt kill worker-1

# Stop the entire town (saves state first)
tt save
tt stop

Recovery Checklist

When things go wrong:

  1. Check Redis β€” Is redis-server running?

    redis-cli -s ./redis.sock PING
    
  2. Check agent state β€” What state is it in?

    tt list
    tt status
    
  3. Check inbox β€” Are messages stuck?

    tt inbox <agent-name>
    
  4. Check tasks β€” What tasks are pending?

    tt tasks
    
  5. Check logs β€” Look in logs/ directory

Comparison with Gastown Recovery

| Feature | Tinytown | Gastown |
|---|---|---|
| Auto-recovery | Manual (you write it) | Witness patrol |
| State persistence | Redis | Git-backed beads |
| Crash detection | Check agent state | Boot/Deacon monitors |
| Work resumption | Reassign tasks | Hook-based (automatic) |

Tinytown puts you in control. Gastown automates more but is more complex. Choose based on your reliability requirements.

Tutorial: Mission Mode

Orchestrate multiple GitHub issues as a single autonomous mission.

What We’ll Build

A mission that implements a user authentication feature spanning three issues:

  1. Issue #1: Design auth API
  2. Issue #2: Implement auth endpoints
  3. Issue #3: Write auth tests

The issues have natural dependencies: design β†’ implement β†’ test.

Prerequisites

  • A running Tinytown instance (tt init, Redis available)
  • GitHub issues created with dependency markers
  • Multiple agents spawned with appropriate roles

Step 1: Create Issues with Dependencies

In your GitHub issues, use dependency markers:

Issue #1: Design auth API

Design the authentication API schema.

- Define login/logout endpoints
- Document token format
- Specify error responses

Issue #2: Implement auth endpoints

Implement the authentication endpoints.

Depends on #1.

- Implement login endpoint
- Implement logout endpoint
- Add token validation

Issue #3: Write auth tests

Write comprehensive tests for auth API.

After #2.

- Unit tests for token validation
- Integration tests for login flow
- Error case coverage

Step 2: Spawn Your Team

# Spawn agents with appropriate roles
tt spawn designer --model claude
tt spawn backend --model auggie
tt spawn tester --model codex

Step 3: Start the Mission

tt mission start --issue 1 --issue 2 --issue 3

Output:

πŸš€ Mission started: abc123-def456-...
πŸ“‹ Objectives: 3 issues
πŸ“¦ Work items: 3
   ⏳ Issue #1: Design auth API
   ⏳ Issue #2: Implement auth endpoints
   ⏳ Issue #3: Write auth tests

Step 4: Monitor Progress

# Check overall status
tt mission status

# Detailed work item view
tt mission status --work

# Watch PR/CI monitors
tt mission status --work --watch

Step 5: Understand the Scheduler

The mission scheduler runs every 30 seconds and:

  1. Promotes work items: Issue #1 starts immediately (no deps)
  2. Assigns to agents: Designer gets Issue #1
  3. Monitors completion: When #1 done, #2 becomes ready
  4. Watches PRs: Creates watch items for CI status
  5. Enforces gates: Reviewer approval before merge
A typical scheduler trace:

Round 1: Issue #1 β†’ ready β†’ assigned to designer
Round 5: Issue #1 done β†’ Issue #2 ready β†’ assigned to backend
Round 10: Issue #2 done β†’ Issue #3 ready β†’ assigned to tester
Round 15: All done β†’ Mission completed

Step 6: Handle Blocking

If CI fails or review is needed:

# Check why mission is blocked
tt mission status --watch

# Output shows:
# 🚧 Watch Items: 1
#    ⚠️  PR #42 CI check: failing (retrying in 180s)

The mission will:

  • Auto-retry CI checks
  • Create fix tasks if bugbot comments
  • Wait for human review if reviewer_required

Step 7: Stop and Resume

# Pause the mission (can resume later)
tt mission stop abc123

# Resume when ready
tt mission resume abc123

Advanced: Custom Policy

# More parallelism
tt mission start -i 1 -i 2 -i 3 --max-parallel 4

# Skip reviewer (for drafts/experiments)
tt mission start -i 1 --no-reviewer

Advanced: Mission Manifest

For complex projects, create mission.toml:

# Override issue handling
[[overrides]]
issue = 1
owner_role = "architect"
priority = 10

[[overrides]]
issue = 2
depends_on = [1]
owner_role = "backend"

[[overrides]]
issue = 3
skip = true  # Exclude from mission

Then reference it (feature coming soon).

Troubleshooting

| Problem | Solution |
|---|---|
| Mission stuck in Planning | Check if issues are accessible |
| Work item never ready | Verify dependency markers parsed |
| Agent not assigned | Spawn an agent with a matching role |
| CI watch failing | Check GitHub API permissions |

Best Practices

  1. Use clear dependency markers: depends on #N in issue body
  2. Keep issues focused: One objective per issue
  3. Role-tag your agents: Match agent roles to work types
  4. Monitor actively: Use --work flag to see progress
  5. Set appropriate parallelism: Don’t overwhelm your agents

tt bootstrap

Download and build Redis using an AI coding agent.

Synopsis

tt bootstrap [VERSION] [OPTIONS]

Description

Bootstraps Redis by delegating to an AI coding agent. The agent:

  1. Fetches the release info from https://github.com/redis/redis/releases
  2. Downloads the source tarball
  3. Builds Redis from source (make)
  4. Installs binaries to ~/.tt/bin/

This gets you the latest Redis compiled and optimized for your machine.

Arguments

| Argument | Description |
|---|---|
| [VERSION] | Redis version to install (default: latest) |

Options

| Option | Short | Description |
|---|---|---|
| --model <CLI> | -m | AI CLI to use (default: claude) |

Examples

Install Latest Redis

tt bootstrap

Install Specific Version

tt bootstrap 8.0.2

Use Different AI CLI

tt bootstrap --model auggie
tt bootstrap --model codex

Output

πŸš€ Bootstrapping Redis latest to /Users/you/.tt
   Using claude to download and build Redis...

πŸ“‹ Running: claude --print --dangerously-skip-permissions < ~/.tt/bootstrap_prompt.md
   (This may take a few minutes to download and compile)

   [Agent output as it downloads and builds...]

βœ… Redis installed successfully!

   Add to your PATH:
   export PATH="/Users/you/.tt/bin:$PATH"

   Or add to ~/.zshrc or ~/.bashrc for persistence.

   Then run: tt init

After Bootstrap

Add Redis to your PATH:

# Add to current session
export PATH="$HOME/.tt/bin:$PATH"

# Add to shell permanently
echo 'export PATH="$HOME/.tt/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

# Verify
redis-server --version

Then initialize a town:

tt init

Why Bootstrap?

| Method | Pros | Cons |
|---|---|---|
| tt bootstrap | Latest version, optimized for your CPU | Takes a few minutes to build |
| brew install redis | Quick, easy | May not have latest 8.0+ |
| apt install redis | System package | Often outdated version |

Alternative Installation Methods

If bootstrap fails or you prefer package managers:

macOS (Homebrew)

brew install redis

Ubuntu/Debian

sudo apt update
sudo apt install redis-server

From Source (Manual)

curl -LO https://github.com/redis/redis/archive/refs/tags/8.0.2.tar.gz
tar xzf 8.0.2.tar.gz
cd redis-8.0.2
make
sudo make install

tt init

Initialize a new town.

Synopsis

tt init [OPTIONS]

Description

Creates a new Tinytown workspace in the current directory. This:

  1. Verifies Redis 8.0+ is installed
  2. Creates the directory structure (agents/, logs/, tasks/)
  3. Generates tinytown.toml configuration
  4. Starts a Redis server on a Unix socket

Options

| Option | Short | Description |
|---|---|---|
| --name <NAME> | -n | Town name (defaults to <repo>-<branch>) |
| --town <PATH> | -t | Town directory (defaults to .) |
| --verbose | -v | Enable verbose logging |

Default Name

If --name is not provided, the town name is automatically derived from:

  1. Git repo + branch: <repo-name>-<branch-name> (e.g., redisearch-feature-auth)
  2. Git repo only: If no branch is available
  3. Directory name: Fallback if not in a git repo

This makes it easy to have unique town names per feature branch.
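The fallback chain can be sketched as follows (illustrative, not the actual implementation):

```rust
// Sketch of the default town-name derivation: repo + branch, then repo only,
// then the directory name. Not the actual implementation.
fn default_town_name(repo: Option<&str>, branch: Option<&str>, dir_name: &str) -> String {
    match (repo, branch) {
        (Some(r), Some(b)) => format!("{}-{}", r, b),
        (Some(r), None) => r.to_string(),
        _ => dir_name.to_string(),
    }
}

fn main() {
    assert_eq!(
        default_town_name(Some("my-project"), Some("feature-auth"), "demo"),
        "my-project-feature-auth"
    );
    assert_eq!(default_town_name(None, None, "demo"), "demo");
}
```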

Examples

Basic Initialization (Auto-Named)

cd ~/git/my-project
git checkout feature-auth
tt init
# Town name: my-project-feature-auth

With Custom Name

tt init --name "My Awesome Project"

Initialize in Different Directory

tt init --town ./projects/new-project --name new-project

Output

✨ Initialized town 'my-project' at .
πŸ“‘ Redis running with Unix socket for fast message passing
πŸš€ Run 'tt spawn <name>' to create agents

Files Created

my-project/
β”œβ”€β”€ tinytown.toml     # Configuration
β”œβ”€β”€ agents/           # Agent working directories
β”œβ”€β”€ logs/             # Activity logs
└── tasks/            # Task storage

Configuration

The generated tinytown.toml:

name = "my-project"
default_cli = "claude"
max_agents = 10

[redis]
use_socket = true
socket_path = "redis.sock"

[agent_clis.claude]
name = "claude"
command = "claude --print"

[agent_clis.auggie]
name = "auggie"
command = "augment"

[agent_clis.codex]
name = "codex"
command = "codex"

Errors

Redis Not Found

Error: Redis not found. Please install Redis 8.0+ and ensure 'redis-server' is on your PATH.
See: https://redis.io/downloads/

Solution: Install Redis 8.0+ and add to PATH.

Redis Version Too Old

Error: Redis version 7.4 is too old. Tinytown requires Redis 8.0 or later.
See: https://redis.io/downloads/

Solution: Upgrade to Redis 8.0+.

Directory Already Initialized

If tinytown.toml already exists, init will fail. Use tt start to connect to an existing town.

tt spawn

Create and start a new agent.

Synopsis

tt spawn <NAME> [OPTIONS]

Description

Spawns a new worker agent in the town. This actually starts an AI process!

The agent:

  1. Registers in Redis with state Starting
  2. Starts a background process (or foreground with --foreground)
  3. Runs in a loop, checking inbox for tasks
  4. Executes the AI model (claude, auggie, etc.) for each task
  5. Stops after --max-rounds iterations

Arguments

| Argument | Description |
|---|---|
| <NAME> | Human-readable agent name (e.g., worker-1, backend, reviewer) |

Options

| Option | Short | Description |
|---|---|---|
| --model <MODEL> | -m | AI CLI to use (default: from tinytown.toml) |
| --max-rounds <N> | | Maximum iterations before stopping (default: 10) |
| --foreground | | Run in foreground instead of background |
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Setting the Default CLI

Edit tinytown.toml to change which AI CLI is used by default:

name = "my-town"
default_cli = "auggie"

Then all tt spawn commands use that CLI:

tt spawn backend              # Uses auggie (from config)
tt spawn frontend --model codex   # Override to use codex

Built-in Agent CLIs

| CLI | Command (non-interactive) |
|---|---|
| claude | claude --print --dangerously-skip-permissions |
| auggie | auggie --print |
| codex | codex exec --dangerously-bypass-approvals-and-sandbox |
| aider | aider --yes --no-auto-commits --message |
| gemini | gemini |
| copilot | gh copilot |
| cursor | cursor |

These are the CLI tools that run AI coding agents, not the underlying models.

Examples

Spawn in Background (Default)

tt spawn worker-1
# Agent runs in background, logs to logs/worker-1.log

Spawn in Foreground (See Output)

tt spawn worker-1 --foreground
# Agent runs in this terminal, you see all output

Limit Iterations

tt spawn worker-1 --max-rounds 5
# Agent stops after 5 rounds (default is 10)

Spawn Multiple Agents (Parallel!)

tt spawn backend &
tt spawn frontend &
tt spawn tester &
# All three run in parallel

Output

πŸ€– Spawned agent 'backend' using model 'auggie'
   ID: 550e8400-e29b-41d4-a716-446655440000
πŸ”„ Starting agent loop in background (max 10 rounds)...
   Logs: ./logs/backend.log
   Agent running in background. Check status with 'tt status'

What Happens

  1. Agent registered in Redis (tt:<town>:agent:<id>)
  2. Background process started running tt agent-loop
  3. Agent loop:
    • Checks inbox for messages
    • If messages: builds prompt, runs AI model
    • Model output logged to logs/<name>_round_<n>.log
    • Repeats until --max-rounds reached
  4. Agent stops with state Stopped

Agent Naming

Choose descriptive names:

| Good Names | Why |
|---|---|
| backend | Describes the work area |
| worker-1 | Simple numbered workers |
| reviewer | Describes the role |
| alice | Personality names work too |

| Avoid | Why |
|---|---|
| agent | Too generic |
| a | Not descriptive |
| Spaces | Use hyphens instead |

Agent State After Spawn

New agents start in Starting state, then transition to Idle:

Starting β†’ Idle (ready for work)

Check state with:

tt list

Errors

Town Not Initialized

Error: Town not initialized at . Run 'tt init' first.

Solution: Run tt init or specify --town path.

Agent Already Exists

Agents are tracked by name. Spawning the same name creates a new agent with a new ID.

tt restart

Restart a stopped agent with fresh rounds.

Synopsis

tt restart <AGENT> [OPTIONS]

Description

Restarts an agent that is in a terminal state (Stopped or Error). Resets the agent’s state to Idle, clears any stop flags, and spawns a new agent process with fresh rounds.

The agent must already exist and be stopped. To create a new agent, use tt spawn.

Arguments

| Argument | Description |
|---|---|
| <AGENT> | Name of the agent to restart |

Options

| Option | Description |
|---|---|
| --rounds <N> | Maximum rounds for restarted agent (default: 10) |
| --foreground | Run in foreground instead of backgrounding |
| --town <PATH> | Town directory (default: .) |
| --verbose | Enable verbose logging |

Examples

Basic Restart

tt restart worker-1

Output:

πŸ”„ Restarting agent 'worker-1'...
   Rounds: 10
   Log: .tt/logs/worker-1.log

βœ… Agent 'worker-1' restarted

Restart with More Rounds

tt restart worker-1 --rounds 20

Restart in Foreground

tt restart worker-1 --foreground
# Agent runs in terminal, you see all output

Error: Agent Still Active

tt restart worker-1

Output:

❌ Agent 'worker-1' is still active (Working)
   Use 'tt kill worker-1' to stop it first

Error: Agent Not Found

tt restart nonexistent

Output:

❌ Agent 'nonexistent' not found

Restart vs Spawn

| Command | Use Case |
|---|---|
| tt restart | Revive existing stopped agent (keeps ID) |
| tt spawn | Create brand new agent (new ID) |

Common Workflow

After agent exhausts its rounds:

# Check status
tt status
# Shows: worker-1 (Stopped) - completed 10/10 rounds

# Restart with more rounds
tt restart worker-1 --rounds 15

See Also

tt assign

Assign a task to an agent.

Synopsis

tt assign <AGENT> <TASK>

Description

Creates a new task record and sends it to the specified agent’s inbox as a semantic task message.

tt assign sends a semantic task message and is the right command for actionable work. Use tt send for non-task communication such as queries, informational updates, or confirmations.

Arguments

| Argument | Description |
|---|---|
| <AGENT> | Agent name to assign to |
| <TASK> | Task description (quoted string) |

Options

| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Basic Assignment

tt assign worker-1 "Implement the login API"

Multi-line Task

tt assign backend "Create a REST API endpoint POST /users that:
- Accepts {email, password, name}
- Validates email format
- Hashes password with bcrypt
- Returns {id, email, name, created_at}"

Assign to Multiple Agents

tt assign frontend "Build the login form"
tt assign backend "Build the auth API"
tt assign tester "Write integration tests"

Output

πŸ“‹ Assigned task 550e8400-e29b-41d4-a716-446655440000 to agent 'worker-1'

What Happens

  1. Task created with state Pending
  2. Task stored in Redis at tt:<town>:task:<id>
  3. Message sent to agent’s inbox at tt:<town>:inbox:<agent-id>
  4. Task state updated to Assigned

Task Lifecycle After Assignment

Pending β†’ Assigned β†’ Running β†’ Completed
                           └─→ Failed
                           └─→ Cancelled

Viewing Assigned Tasks

Check what’s in an agent’s inbox:

# Using redis-cli (replace <town> with your town name)
redis-cli -s ./redis.sock LLEN tt:<town>:inbox:<agent-id>
redis-cli -s ./redis.sock LRANGE tt:<town>:inbox:<agent-id> 0 -1

Or check status:

tt status

Errors

Agent Not Found

Error: Agent not found: nonexistent

Solution: Spawn the agent first with tt spawn.

Town Not Initialized

Error: Town not initialized at . Run 'tt init' first.

Task Description Tips

Good task descriptions:

  • Be specific about what to build
  • Include acceptance criteria
  • Mention relevant files/paths
  • Specify output format if needed
# βœ… Good
tt assign backend "Create POST /api/users endpoint in src/routes/users.rs. 
Accept JSON body {email, password}. Return 201 with {id, email}."

# ❌ Too vague
tt assign backend "Build API"

Use tt assign when the recipient should do concrete work, not just acknowledge or discuss.

See Also

tt backlog

Manage the global backlog of unassigned tasks.

Synopsis

tt backlog <SUBCOMMAND> [OPTIONS]

Description

Use backlog when work should exist in Tinytown but should not be assigned immediately.

Backlog tasks are stored in Redis, can be tagged, and can be claimed later by the right agent.

Subcommands

Add

tt backlog add "<TASK DESCRIPTION>" [--tags tag1,tag2]

Creates a new task and places it in the global backlog queue.

List

tt backlog list

Shows all backlog task IDs with a short description and tags.

Claim

tt backlog claim <TASK_ID> <AGENT>

Removes a task from backlog, assigns it to <AGENT>, and sends a semantic TaskAssign message to that agent.

Assign All

tt backlog assign-all <AGENT>

Bulk-assigns every backlog task to one agent (useful for manual catch-up or handoff).

Remove

tt backlog remove <TASK_ID>

Removes a task from the backlog without assigning it to any agent. Useful for cleaning up tasks that are no longer needed or were added by mistake. The task record is deleted as part of the removal.

Options

| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Park Work in Backlog

tt backlog add "Investigate flaky auth integration test" --tags test,auth,backend
tt backlog add "Document token refresh behavior" --tags docs,api

Review and Claim by Role

# Backend agent role
tt backlog list
tt backlog claim 550e8400-e29b-41d4-a716-446655440000 backend

# Docs agent role
tt backlog claim 550e8400-e29b-41d4-a716-446655440111 docs

Role-Based Claiming Pattern

When agents are idle, have them:

  1. Run tt backlog list
  2. Claim one task matching their role/tags
  3. Work it to completion, then repeat

This keeps specialists busy without over-assigning work up front.
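This pattern can be sketched as simple tag matching (illustrative; the real backlog lives in Redis, and `claim_for_role` is a hypothetical helper, not a tt command):

```python
def claim_for_role(backlog, role):
    """Pop the first backlog task whose tags include the agent's role.
    backlog is a list of (task_id, description, tags) tuples."""
    for i, (task_id, desc, tags) in enumerate(backlog):
        if role in tags:
            return backlog.pop(i)   # claimed: removed from the backlog
    return None                     # nothing matches this role
```

An idle agent would call this in a loop: list, claim one matching task, work it, repeat.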

See Also

tt list

List all agents in the town.

Synopsis

tt list [OPTIONS]

Description

Shows all agents registered in the town with their current state.

Options

| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

List All Agents

tt list

Output

Agents:
  backend (550e8400-e29b-41d4-a716-446655440000) - Working
  frontend (6ba7b810-9dad-11d1-80b4-00c04fd430c8) - Idle
  reviewer (6ba7b811-9dad-11d1-80b4-00c04fd430c9) - Idle

No Agents

No agents. Run 'tt spawn <name>' to create one.

Agent States

| State | Meaning |
|---|---|
| Starting | Agent is initializing |
| Idle | Ready for work |
| Working | Executing a task |
| Paused | Temporarily stopped |
| Stopped | Terminated |
| Error | Something went wrong |

See Also

tt status

Show town status.

Synopsis

tt status [OPTIONS]

Description

Displays comprehensive status of the town including:

  • Town name and location
  • Redis connection info
  • All agents with their states and pending messages
  • Message type breakdown for pending inbox items (tasks, queries, informational, confirmations)
  • With --deep: Recent activity from each agent

Options

| Option | Short | Description |
|---|---|---|
| --deep | | Show recent agent activity (stored in Redis) |
| --tasks | | Show detailed task breakdown by state and agent |
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Basic Status

tt status

Output:

🏘️  Town: my-project
πŸ“‚ Root: /Users/you/projects/my-project
πŸ“‘ Redis: unix:///Users/you/projects/my-project/redis.sock
πŸ€– Agents: 3
   backend (Working) - 0 messages pending
   frontend (Idle) - 2 messages pending (tasks: 1, queries: 1, informational: 0, confirmations: 0)
   reviewer (Idle) - 1 messages pending (tasks: 0, queries: 0, informational: 1, confirmations: 0)

Deep Status (with stats and activity)

tt status --deep

Output:

🏘️  Town: my-project
πŸ“‚ Root: /Users/you/projects/my-project
πŸ“‘ Redis: unix:///Users/you/projects/my-project/redis.sock
πŸ€– Agents: 3
   backend (Working) - 0 pending, 12 rounds, uptime 1h 23m
      └─ Round 12: βœ… completed
      └─ Round 11: βœ… completed
   frontend (Idle) - 2 pending (tasks: 1, queries: 1, informational: 0, confirmations: 0), 5 rounds, uptime 45m 12s
      └─ Round 5: βœ… completed
   reviewer (Idle) - 1 pending (tasks: 0, queries: 0, informational: 1, confirmations: 0), 2 rounds, uptime 30m 5s
      └─ Round 2: ⚠️ model error

πŸ“Š Stats: rounds completed, uptime since spawn

Task Status (detailed task tracking)

tt status --tasks

Output:

🏘️  Town: my-project
πŸ“‚ Root: /Users/you/projects/my-project
πŸ“‘ Redis: unix:///Users/you/projects/my-project/redis.sock
πŸ€– Agents: 2
   backend (Working) - 0 messages pending
   reviewer (Idle) - 1 messages pending
πŸ“‹ Tasks: 8 total (2 pending, 3 in-flight, 3 done)

πŸ“Š Task Breakdown by State:
   ⏳ Pending:   1
   πŸ“Œ Assigned:  1
   πŸ”„ Running:   2
   βœ… Completed: 3
   ❌ Failed:    0
   🚫 Cancelled: 1
   πŸ“‹ Backlog:   2

πŸ“‹ Tasks by Agent:
   backend (2 active, 2 done):
      πŸ”„ abc123 Implement user authentication...
      πŸ“Œ def456 Add rate limiting to API...
      βœ… ghi789 Setup database migrations...
      βœ… jkl012 Create user model...
   reviewer (1 active, 1 done):
      πŸ”„ mno345 Review auth implementation...
      βœ… pqr678 Review database schema...
   (unassigned) (2 tasks):
      ⏳ stu901 Write integration tests...

Stats Shown

| Stat | Description |
|---|---|
| Rounds | Number of agent loop iterations completed |
| Uptime | Time since agent was spawned |
| Pending | Messages waiting in inbox |
| Message Types | Pending breakdown: tasks, queries, informational, confirmations |
| Activity | Recent round results (last 5) |
| Task States | With --tasks: Pending, Assigned, Running, Completed, Failed, Cancelled, Backlog |
| Tasks by Agent | With --tasks: Tasks grouped by assigned agent with state icons |

Output Fields

| Field | Description |
|---|---|
| Town | Name from tinytown.toml |
| Root | Absolute path to town directory |
| Redis | Connection URL (socket or TCP) |
| Agents | Count and details |

Agent Details

For each agent:

  • Name β€” Human-readable identifier
  • State β€” Current lifecycle state
  • Messages β€” Number of pending inbox messages
  • Type Breakdown β€” Pending messages grouped as tasks, queries, informational, confirmations

Interpreting Status

| Situation | Meaning | Action |
|---|---|---|
| Agent Idle + 0 messages | Ready for work | Assign a task |
| Agent Idle + N messages | Messages waiting | Agent should process |
| Agent Working | Busy with task | Wait or check progress |
| Agent Error | Something failed | Check logs, respawn |

| Command | When to Use |
|---|---|
| tt status | Overview of everything |
| tt list | Just agent names and states |

Direct Redis Inspection

For more detail:

# Connect to Redis
redis-cli -s ./redis.sock

# List all keys for your town
KEYS tt:<town_name>:*

# Check specific inbox
LLEN tt:<town_name>:inbox:550e8400-e29b-41d4-a716-446655440000

# View agent state
GET tt:<town_name>:agent:550e8400-e29b-41d4-a716-446655440000

See Also

tt tasks (Deprecated)

Note: The tt tasks command is deprecated; use tt inbox --all instead.

Migration

The functionality of tt tasks is now available via:

tt inbox --all

See tt inbox for full documentation.

See Also

  • tt inbox β€” Check agent inboxes (with --all flag)
  • tt task β€” Manage individual tasks

tt task

Manage individual tasks.

Synopsis

tt task <SUBCOMMAND> [OPTIONS]

Description

Provides operations for managing specific tasks by ID: completing, viewing details, or listing tasks.

Subcommands

Complete

Mark a task as completed:

tt task complete <TASK_ID> [--result <MESSAGE>]

Show

View details of a specific task:

tt task show <TASK_ID>

List

List all tasks with optional filtering:

tt task list [--state <STATE>]

States: pending, assigned, running, completed, failed, cancelled

Options

| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Mark a Task Complete

tt task complete 550e8400-e29b-41d4-a716-446655440000 --result "Fixed the bug"

Output:

βœ… Task 550e8400-e29b-41d4-a716-446655440000 marked as completed
   Description: Fix authentication bug
   Result: Fixed the bug

View Task Details

tt task show 550e8400-e29b-41d4-a716-446655440000

Output:

πŸ“‹ Task: 550e8400-e29b-41d4-a716-446655440000
   Description: Fix authentication bug
   State: Running
   Assigned to: backend-1
   Created: 2025-03-09T12:00:00Z
   Updated: 2025-03-09T12:05:00Z
   Started: 2025-03-09T12:01:00Z
   Tags: backend, auth

List Running Tasks

tt task list --state running

Output:

πŸ“‹ Tasks (2):
   πŸ”„ 550e8400-... - Fix authentication bug [backend-1]
   πŸ”„ 660e9500-... - Update API endpoints [backend-2]

List All Tasks

tt task list

State icons:

  • ⏳ Pending
  • πŸ“Œ Assigned
  • πŸ”„ Running
  • βœ… Completed
  • ❌ Failed
  • 🚫 Cancelled

See Also

tt inbox

Check agent inbox(es).

Synopsis

tt inbox <AGENT>         # Check specific agent
tt inbox --all           # Show all agents' inboxes

Description

Shows pending messages in agent inboxes.

When used with --all, displays a summary of pending messages for all agents, categorized by type:

  • [T] Tasks requiring action
  • [Q] Queries awaiting response
  • [I] Informational messages (FYI)
  • [C] Confirmations/acknowledgments

Messages are added by:

  • tt assign β€” Creates a task and sends TaskAssign message
  • tt send β€” Sends a custom message

Arguments

| Argument | Description |
|---|---|
| [AGENT] | Agent name to check (optional with --all) |

Options

| Option | Short | Description |
|---|---|---|
| --all | -a | Show pending messages for all agents |
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Check Specific Agent

tt inbox backend

Output:

πŸ“¬ Inbox for 'backend': 3 messages

View All Agents’ Inboxes

tt inbox --all

Output:

πŸ“‹ Pending Messages by Agent:

  backend (Working):
    [T] 2 tasks requiring action
    [Q] 1 queries awaiting response
    [I] 3 informational
    [C] 0 confirmations
    β€’ Fix authentication bug in login endpoint
    β€’ Update database schema for new fields

  reviewer (Idle):
    [T] 1 tasks requiring action
    [Q] 0 queries awaiting response
    [I] 0 informational
    [C] 2 confirmations
    β€’ Review PR #42: Add user validation

Total: 4 actionable message(s)

Comparison with tt status

  • tt status shows agent states and total inbox counts
  • tt inbox --all shows message breakdown and previews

See Also

tt send

Send a message to an agent.

Synopsis

tt send <TO> <MESSAGE> [OPTIONS]

Description

Sends a semantic message to an agent’s inbox. The agent will receive it on their next inbox check.

Message semantics:

  • Default (no semantic flag): Task-style/actionable message
  • --query: Question that expects a response or decision
  • --info: Informational update (context only)
  • --ack: Confirmation/receipt message

With --urgent: Message goes to priority inbox, processed before regular messages!

Use this for:

  • Agent-to-agent communication
  • Conductor instructions
  • Custom coordination
  • Urgent: Interrupt agents with priority messages

Arguments

| Argument | Description |
|---|---|
| <TO> | Target agent name |
| <MESSAGE> | Message content |

Options

| Option | Description |
|---|---|
| --query | Mark message as a query (query semantic type) |
| --info | Mark message as informational (info semantic type) |
| --ack | Mark message as confirmation (ack semantic type) |
| --urgent | Send as urgent (processed first at start of next round) |

Examples

Send a Regular Message

tt send backend "The API spec is ready in docs/api.md"

Output:

πŸ“€ Sent task message to 'backend'

Send a Query

tt send backend --query "Can you take auth token refresh next?"

Send Informational Context

tt send reviewer --info "CI is green on commit a1b2c3d"

Send a Confirmation

tt send conductor --ack "Received. I'll start after current task."

Send an URGENT Message

tt send backend --urgent "STOP! Security vulnerability found. Do not merge."

Output:

🚨 Sent URGENT task message to 'backend'

The agent will see this at the start of their next round, before processing regular inbox.

Coordination Between Agents

# Developer finishes, notifies reviewer
tt send reviewer "Implementation complete. Please review src/auth.rs"

# Critical bug found - urgent interrupt
tt send developer --urgent "Critical: SQL injection in login. Fix immediately."

How It Works

Regular Messages

  1. Goes to tt:<town>:inbox:<id> (Redis list)
  2. Processed in order with other messages
  3. Agent sees it when they check inbox
  4. Semantic type is attached as task, query, info, or ack

Urgent Messages

  1. Goes to tt:<town>:urgent:<id> (separate priority queue)
  2. Agent checks urgent queue FIRST at start of each round
  3. Urgent messages injected into agent’s prompt with 🚨 marker
  4. Processed before regular inbox
  5. Keeps its semantic type (task, query, info, or ack)
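The resulting ordering can be sketched as draining the urgent queue before the regular inbox each round (illustrative Python, not the actual implementation):

```python
def next_round_messages(urgent, inbox):
    """Return messages in the order an agent would see them in one round:
    everything urgent first, then the regular inbox, both FIFO."""
    batch = urgent[:] + inbox[:]
    urgent.clear()
    inbox.clear()
    return batch
```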

See Also

tt kill

Stop an agent gracefully.

Synopsis

tt kill <AGENT>

Description

Requests an agent to stop gracefully. The agent will:

  1. Finish its current model run (if any)
  2. Check the stop flag at the start of next round
  3. Exit cleanly with state Stopped

This is a graceful stop, not an immediate kill. The agent completes its current work before stopping.

Arguments

ArgumentDescription
<AGENT>Agent name to stop

Examples

Stop a Single Agent

tt kill backend

Output:

πŸ›‘ Requested stop for agent 'backend'
   Agent will stop at the start of its next round.

Stop All Agents (Cleanup)

tt kill backend
tt kill frontend
tt kill reviewer

Check That Agent Stopped

tt status

Output:

πŸ€– Agents: 3
   backend (Stopped) - 0 messages pending
   frontend (Idle) - 0 messages pending
   reviewer (Working) - 1 messages pending

How It Works

  1. Sets a stop flag in Redis: tt:stop:<agent-id>
  2. Agent checks this flag at start of each round
  3. If flag is set, agent exits loop gracefully
  4. Flag has 1-hour TTL (auto-cleanup if agent already dead)
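A minimal in-memory model of the stop flag and its TTL (`StopFlags` is a hypothetical stand-in; the real flag is a Redis key with an expiry):

```python
import time

class StopFlags:
    """In-memory stand-in for the Redis stop keys described above."""
    def __init__(self):
        self._flags = {}                     # agent_id -> expiry timestamp

    def request_stop(self, agent_id, ttl=3600, now=None):
        now = time.time() if now is None else now
        self._flags[agent_id] = now + ttl    # flag expires after 1 hour

    def should_stop(self, agent_id, now=None):
        now = time.time() if now is None else now
        expiry = self._flags.get(agent_id)
        if expiry is None:
            return False
        if now >= expiry:                    # TTL elapsed: auto-cleanup
            del self._flags[agent_id]
            return False
        return True
```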

When to Use

  • Work complete: All tasks finished, clean up agents
  • Stuck agent: Agent not making progress, stop and respawn
  • Resource cleanup: Free up system resources
  • Reconfigure: Stop agent to change model or settings

See Also

tt prune

Remove stopped or stale agents from Redis.

Synopsis

tt prune [OPTIONS]

Description

Removes agents that are in a terminal state (Stopped or Error) from Redis. This cleans up agent records after they’ve finished or failed. Useful for managing long-running towns where many agents have come and gone.
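The selection logic can be sketched in memory (`prune` here is a hypothetical stand-in for the Redis cleanup):

```python
def prune(agents, remove_all=False):
    """Remove terminal agents (Stopped/Error) from the registry;
    with remove_all, remove everyone. Returns the removed names."""
    removable = {name for name, state in agents.items()
                 if remove_all or state in ("Stopped", "Error")}
    for name in removable:
        del agents[name]
    return removable
```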

Options

| Option | Short | Description |
|---|---|---|
| --all | | Remove ALL agents, not just stopped ones |
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Prune Only Stopped Agents

tt prune

Output:

πŸ—‘οΈ  Removed worker-1 (abc123...) - Stopped
πŸ—‘οΈ  Removed worker-2 (def456...) - Error
✨ Pruned 2 agent(s)

Remove All Agents

tt prune --all

⚠️ Warning: This removes ALL agents including active ones.

Common Workflow

After recovering orphaned agents, prune them:

# First, recover any orphaned agents (marks them as stopped)
tt recover

# Then remove stopped agents from Redis
tt prune

See Also

tt recover

Detect and clean up orphaned agents.

Synopsis

tt recover [OPTIONS]

Description

Scans for agents that appear to be running (Working/Starting state) but whose processes have actually crashed or been killed. Marks these orphaned agents as Stopped so they can be pruned or restarted.

An agent is considered orphaned if it’s in Working or Starting state and either:

  β€’ Its log file hasn’t been modified in 2+ minutes, or
  β€’ Its last heartbeat was 2+ minutes ago
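These criteria can be sketched as a single predicate (`is_orphaned` is a hypothetical helper matching the rules above):

```python
STALE_SECONDS = 120   # 2 minutes, per the criteria above

def is_orphaned(state, log_age_s, heartbeat_age_s):
    """An agent looks orphaned when it claims to be running but has shown
    no signs of life for 2+ minutes (stale log OR stale heartbeat)."""
    if state not in ("Working", "Starting"):
        return False
    return log_age_s >= STALE_SECONDS or heartbeat_age_s >= STALE_SECONDS
```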

Options

| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

Examples

Scan for Orphaned Agents

tt recover

Output when orphans found:

πŸ” Scanning for orphaned agents...
   πŸ”„ Recovered 'worker-1' (Working) - last heartbeat 5m ago
   πŸ”„ Recovered 'worker-2' (Working) - last heartbeat 3m ago

✨ Recovered 2 orphaned agent(s) (4 total checked)
   Run 'tt prune' to remove them from Redis

Output when no orphans:

πŸ” Scanning for orphaned agents...

✨ No orphaned agents found (3 agents checked)

Common Workflow

After a crash or system restart:

# 1. Recover orphaned agents (marks them stopped)
tt recover

# 2. Optional: Reclaim tasks from dead agents
tt reclaim --to-backlog

# 3. Clean up stopped agents
tt prune

# 4. Restart needed agents
tt restart worker-1

When to Use

  • After system restart or crash
  • When agents appear β€œstuck” in Working state
  • Before reclaiming tasks from dead agents

See Also

tt reclaim

Recover orphaned tasks from dead agents.

Synopsis

tt reclaim [OPTIONS]

Description

Finds tasks assigned to agents in terminal states (Stopped/Error) and moves them elsewhere. This prevents work from being lost when agents crash.

Without destination flags, lists orphaned tasks. With flags, moves them.
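The move logic can be sketched in memory (`reclaim` here is illustrative; real tasks and inboxes live in Redis, and the exact behavior may differ):

```python
def reclaim(agents, inboxes, backlog, to=None, to_backlog=False):
    """Sketch of reclaim: find tasks queued for agents in terminal states
    (Stopped/Error) and optionally move them to the backlog or another agent.
    With no destination, just report what would move."""
    dead = [n for n, state in agents.items() if state in ("Stopped", "Error")]
    orphaned = [task for n in dead for task in inboxes.get(n, [])]
    if to_backlog or to is not None:
        for n in dead:
            inboxes[n] = []                          # drain dead agents' inboxes
        if to_backlog:
            backlog.extend(orphaned)
        else:
            inboxes.setdefault(to, []).extend(orphaned)
    return orphaned                                  # preview mode just lists them
```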

Options

| Option | Description |
|---|---|
| --to-backlog | Move orphaned tasks to the global backlog |
| --to <AGENT> | Move orphaned tasks to a specific agent |
| --from <AGENT> | Reclaim only from a specific dead agent |
| --town <PATH> | Town directory (default: .) |
| --verbose | Enable verbose logging |

Examples

Preview Orphaned Tasks

tt reclaim

Output:

πŸ”„ Reclaiming orphaned tasks...
   worker-1 (Stopped): 3 message(s)
      task: 550e8400-e29b-41d4-a716-446655440000
      task: Fix authentication bug in login endpoint
      task: Update database schema

πŸ“‹ Found 3 orphaned task(s)
   Use --to-backlog or --to <agent> to reclaim them

Move Tasks to Backlog

tt reclaim --to-backlog

Output:

πŸ”„ Reclaiming orphaned tasks...
   worker-1 (Stopped): 3 message(s)
      β†’ backlog: 550e8400-e29b-41d4-a716-446655440000
      β†’ backlog: 660e9500-e29b-41d4-a716-446655440111
      β†’ backlog: 770f0600-e29b-41d4-a716-446655440222

βœ… Moved 3 task(s) to backlog

Reassign to Another Agent

tt reclaim --to worker-2

Output:

πŸ”„ Reclaiming orphaned tasks...
   worker-1 (Stopped): 3 message(s)
      β†’ worker-2: 550e8400-e29b-41d4-a716-446655440000
      β†’ worker-2: Fix authentication bug in login endpoint
      β†’ worker-2: Update database schema

βœ… Moved 3 task(s) to 'worker-2'

Reclaim from Specific Agent

tt reclaim --from worker-1 --to-backlog

Common Workflow

After a crash:

# 1. Recover orphaned agents first
tt recover

# 2. Reclaim their tasks to backlog
tt reclaim --to-backlog

# 3. Clean up stopped agents
tt prune

# 4. Restart or spawn new agents
tt restart worker-1
# or
tt spawn worker-3

# 5. Let agents claim from backlog

See Also

tt reset

Reset all town state, clearing agents, tasks, and messages from Redis.

Synopsis

tt reset [OPTIONS]

Description

Performs a complete reset of the town’s Redis state. This is useful when you want to start fresh without reinitializing the entire town, or when cleaning up after a failed run.

⚠️ Warning: This operation cannot be undone. All agents, tasks, and messages will be permanently deleted.

Options

| Option | Description |
|---|---|
| --force | Skip the confirmation prompt and proceed immediately |
| --agents-only | Only reset agent-related state (agents and inboxes), preserving tasks and backlog |
| --town <PATH> | Town directory (default: .) |
| --verbose | Enable verbose logging |

Examples

Full Reset (with confirmation)

tt reset

Output:

πŸ—‘οΈ  Resetting town 'my-project'
   This will delete:
   - 3 agent(s)
   - 12 task(s)
   - 2 backlog item(s)

⚠️  This action cannot be undone!
   Run with --force to confirm: tt reset --force

Full Reset (immediate)

tt reset --force

Output:

πŸ—‘οΈ  Resetting town 'my-project'
   This will delete:
   - 3 agent(s)
   - 12 task(s)
   - 2 backlog item(s)

βœ… Reset complete: deleted 47 Redis keys
   Run 'tt spawn <name>' to create new agents

Agents-Only Reset

Reset just the agents while preserving tasks and backlog:

tt reset --agents-only --force

Output:

πŸ—‘οΈ  Resetting agents in town 'my-project'
   This will delete:
   - 3 agent(s) and their inboxes
   Tasks and backlog will be preserved.

βœ… Reset complete: deleted 12 Redis keys (agents only)
   Run 'tt spawn <name>' to create new agents

Use Cases

| Scenario | Command |
|---|---|
| Start completely fresh | tt reset --force |
| Replace agents but keep tasks | tt reset --agents-only --force |
| Preview what will be deleted | tt reset (no --force) |

What Gets Deleted

Full Reset (tt reset --force)

  • All registered agents (tt:<town>:agent:*)
  • All agent inboxes (tt:<town>:inbox:*)
  • All tasks (tt:<town>:task:*)
  • All backlog items (tt:<town>:backlog)
  • Agent activity logs

Agents-Only Reset (tt reset --agents-only --force)

  • All registered agents (tt:<town>:agent:*)
  • All agent inboxes (tt:<town>:inbox:*)
  • Agent activity logs

Tasks and backlog are preserved.
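The key selection can be sketched with glob patterns over the key layouts listed above (a model of the behavior, not the actual implementation):

```python
import fnmatch

def keys_to_delete(keys, town, agents_only=False):
    """Select the Redis-style keys a reset would remove, by pattern."""
    patterns = [f"tt:{town}:agent:*", f"tt:{town}:inbox:*"]
    if not agents_only:
        patterns += [f"tt:{town}:task:*", f"tt:{town}:backlog"]
    return [k for k in keys if any(fnmatch.fnmatch(k, p) for p in patterns)]
```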

Recovery

If you accidentally reset:

  • If you previously ran tt save, you may be able to tt restore from the AOF file
  • If tasks were synced to tasks.toml, you can tt sync push to recreate them

See Also

  • tt init β€” Initialize a new town
  • tt spawn β€” Create agents after reset
  • tt save β€” Save state before reset
  • tt restore β€” Restore from saved state

tt conductor

Start the conductor - an AI agent that orchestrates your town.

Synopsis

tt conductor [OPTIONS]

Description

The conductor is an AI agent (using your default model) that coordinates your Tinytown! πŸš‚

Like the train conductor guiding the miniature train through Tiny Town, Colorado, it:

  • Understands what you want to build
  • Breaks down work into tasks
  • Spawns appropriate agents
  • Assigns tasks to agents
  • Keeps unassigned work in backlog
  • Monitors progress
  • Helps resolve blockers

The conductor knows how to use the tt CLI to orchestrate your project.

Options

| Option | Short | Description |
|---|---|---|
| --town <PATH> | -t | Town directory (default: .) |
| --verbose | -v | Enable verbose logging |

How It Works

  1. Context Injection: The conductor receives context about:

    • Current town state (agents, tasks, pending messages)
    • Whether this is a fresh start or resuming an existing session
    • Available tt commands and suggested team roles
    • Its role as orchestrator
  2. AI Model Launch: Your default model (claude, auggie, etc.) starts with this context

  3. Natural Conversation: You describe what you want, the AI orchestrates

Fresh Start vs Resuming

The conductor behaves differently based on existing state:

Fresh Start (No Agents)

When starting with a new town, the conductor:

  1. Asks what you’re trying to build
  2. Offers to analyze the project (README, design docs, codebase)
  3. Suggests appropriate team roles for your project
  4. Helps break down your idea into tasks and agent assignments

Resuming (Existing Agents)

When agents already exist, the conductor:

  1. Shows current agent status
  2. Checks progress with tt status --deep
  3. Continues coordinating from where you left off

Suggested Team Roles

The conductor knows about common team roles and when to suggest them:

| Role | When to Suggest |
|---|---|
| backend | API development, server-side logic |
| frontend | UI/UX implementation |
| tester | Writing and running tests |
| reviewer | Always include - quality gate |
| devops | CI/CD, deployment, infrastructure |
| security | Security review, vulnerability analysis |
| docs | Documentation, API specs, README |
| architect | System design, code structure |

The conductor will analyze your project and suggest roles that make sense.

Example Session

$ tt conductor
πŸš‚ Starting conductor with claude model...
   Context: ./.conductor_context.md

   Running: claude --print

# Tinytown Conductor

You are the **conductor** of Tinytown "my-project"...
[context displayed]

---

User: Build a user authentication system with login, signup, and password reset.

Conductor: I'll set up a team for this. Let me spawn some agents and create a plan.

[Conductor runs: tt spawn architect]
[Conductor runs: tt spawn backend]
[Conductor runs: tt spawn tester]

I've created three agents. Now let me assign the initial work:

[Conductor runs: tt assign architect "Design REST API for user authentication..."]

The architect is working on the API design. Once complete, I'll assign implementation to the backend agent and tests to the tester.

[Conductor runs: tt status]

Current status:
- architect (Working) - designing the API
- backend (Idle) - waiting for design
- tester (Idle) - will write tests after implementation

The Reviewer Pattern

The conductor always spawns a reviewer agent. This creates a simple completion protocol:

Worker completes task
       ↓
Conductor assigns review task to reviewer
       ↓
Reviewer checks work β†’ approves or requests changes
       ↓
Conductor marks complete or assigns fixes

This keeps it simple:

  • Workers do the work
  • Reviewer decides if it’s done
  • Conductor coordinates everything

Backlog Pattern

Use backlog for work that should exist but should not be assigned yet:

tt backlog add "Task needing ownership decision" --tags backend,auth
tt backlog list
tt backlog claim <task_id> <agent>

A practical approach:

  • Conductor adds uncertain work to backlog
  • Idle agents review backlog
  • Agents claim role-matching tasks

The Conductor’s Context

The conductor receives a markdown context file that includes:

# Tinytown Conductor

You are the **conductor** of Tinytown "my-project"...

## Current Town State
- Agents: backend (Working), reviewer (Idle)
- Tasks pending: 1

## Your Capabilities
- tt spawn <name> - Create agents
- tt assign <agent> "task" - Assign work
- tt backlog list - Review unassigned tasks
- tt backlog claim <task_id> <agent> - Claim backlog task
- tt task complete <task_id> --result "summary" - Mark task done
- tt status - Check progress

## The Reviewer Pattern
Always spawn a reviewer. They decide when work is done.

## Your Role
1. Break down user requests into tasks
2. Spawn workers + reviewer
3. Assign work, then assign review
4. Coordinate until reviewer approves
5. Save state with `tt sync pull`, suggest git commit

Comparison with gt mayor attach

| Gastown | Tinytown |
|---|---|
| gt mayor attach | tt conductor |
| Natural language | Natural language βœ“ |
| Mayor is complex orchestrator | Conductor is simple AI + CLI |
| Hard to understand what Mayor does | You can read the context |
| Recovery daemons, convoys, beads | Just tt commands |

The conductor is transparent: you can see exactly what context it has and what commands it runs.

See Also

tt plan

Plan tasks in a file before starting work.

Synopsis

tt plan [OPTIONS]
tt plan --init

Description

The plan command lets you define tasks in tasks.toml before syncing them to Redis. This enables:

  1. Version control β€” Check in your task plan with your code
  2. Offline planning β€” Edit tasks without Redis running
  3. Review before execution β€” Plan the work, then start the train

Options

| Option | Short | Description |
|---|---|---|
| --init | -i | Create a new tasks.toml file |
| --town <PATH> | -t | Town directory (default: .) |

Examples

Initialize a Task Plan

tt plan --init

Creates tasks.toml:

[meta]
description = "Task plan for this project"

[[tasks]]
id = "example-1"
description = "Example task - replace with your own"
status = "pending"
tags = ["example"]

View Current Plan

tt plan

Output:

πŸ“‹ Tasks in plan (./tasks.toml):
  ⏳ [unassigned] example-1 - Example task - replace with your own

Edit and Sync

# Edit tasks.toml with your editor
vim tasks.toml

# Push to Redis
tt sync push

Task File Format

[meta]
description = "Sprint 1 tasks"
default_agent = "developer"  # Optional default

[[tasks]]
id = "auth-api"
description = "Build user authentication API"
agent = "backend"
status = "pending"
tags = ["auth", "api"]

[[tasks]]
id = "auth-tests"
description = "Write auth API tests"
agent = "tester"
status = "pending"
parent = "auth-api"  # Optional parent task

[[tasks]]
id = "auth-review"
description = "Review auth implementation"
status = "pending"
# No agent = unassigned

Task Status Values

| Status | Icon | Meaning |
|---|---|---|
| pending | ⏳ | Not started |
| assigned | πŸ“Œ | Given to an agent |
| running | πŸ”„ | In progress |
| completed | βœ… | Done |
| failed | ❌ | Error |

Workflow

  1. Plan: tt plan --init β†’ edit tasks.toml
  2. Review: tt plan to see the plan
  3. Start: tt sync push to send to Redis
  4. Execute: Agents receive tasks
  5. Snapshot: tt sync pull to save state

Why Plan in a File?

  • Git history β€” Track how the plan evolved
  • Code review β€” Review task definitions in PRs
  • Templates β€” Reuse task structures across projects
  • Offline β€” Plan without starting Redis

Redis remains the source of truth at runtime; the file is for planning and version control.

See Also

tt sync

Synchronize tasks between tasks.toml and Redis.

Synopsis

tt sync [push|pull]

Description

The sync command moves task data between the tasks.toml file and Redis:

  • push: File β†’ Redis (deploy your plan)
  • pull: Redis β†’ File (snapshot current state)

Arguments

| Argument | Description |
|---|---|
| push | Send tasks from tasks.toml to Redis (default) |
| pull | Save Redis tasks to tasks.toml |

Examples

Push Plan to Redis

After editing tasks.toml:

tt sync push

Output:

⬆️  Pushed 5 tasks from tasks.toml to Redis

Pull State from Redis

Save current Redis state to file:

tt sync pull

Output:

⬇️  Pulled 5 tasks from Redis to tasks.toml

Workflow

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     push      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   tasks.toml    β”‚ ───────────►  β”‚     Redis       β”‚
β”‚   (planning)    β”‚               β”‚   (execution)   β”‚
β”‚                 β”‚ ◄───────────  β”‚                 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     pull      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
        β”‚                                 β”‚
        β”‚                                 β”‚
        β–Ό                                 β–Ό
   Git tracked                     Fast queries
   Human readable                  Agent access
   Offline edits                   Real-time state

When to Use

Push (file β†’ Redis)

  • After editing tasks.toml
  • At the start of a work session
  • To reset task state

Pull (Redis β†’ file)

  • Before committing to git
  • To snapshot progress
  • To share state with team

Data Flow

Push Behavior

  1. Reads all tasks from tasks.toml
  2. Creates corresponding Task objects
  3. Stores each in Redis at tt:<town>:task:<id>
  4. Tags include plan:<id> for tracking
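The key and tag construction in steps 3–4 is plain string formatting; a sketch (helper names are hypothetical, not Tinytown's actual functions):

```rust
/// Build the Redis key for a task, following the
/// tt:<town>:task:<id> layout described above.
/// Helper names are illustrative, not Tinytown's API.
pub fn task_key(town: &str, task_id: &str) -> String {
    format!("tt:{town}:task:{task_id}")
}

/// Tag added on push so plan-sourced tasks can be traced.
pub fn plan_tag(task_id: &str) -> String {
    format!("plan:{task_id}")
}
```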

Pull Behavior

  1. (Currently) Initializes an empty tasks.toml if missing
  2. (Future) Scans Redis for tt:<town>:task:* keys
  3. (Future) Converts them to TaskEntry format
  4. (Future) Writes them to tasks.toml


tt save

Save Redis state to AOF file for version control.

Synopsis

tt save

Description

Triggers Redis to compact and save its state to an AOF (Append Only File). This file can then be version controlled with git.

How It Works

  1. Sends BGREWRITEAOF command to Redis
  2. Redis compacts all operations into a single AOF file
  3. File is saved to .tt/redis.aof (configurable in tinytown.toml)

Example

tt save

Output:

πŸ’Ύ Saving Redis state...
   AOF rewrite triggered. File: ./.tt/redis.aof

   To version control Redis state:
   git add .tt/redis.aof
   git commit -m 'Save town state'

Version Control Workflow

# Work on your project
tt spawn backend
tt assign backend "Build the API"
# ... agents work ...

# Save state before committing code
tt save
git add .tt/redis.aof tasks.toml
git commit -m "API implementation complete"

# Later, restore on another machine
git pull
tt restore  # See instructions

AOF File Contents

The AOF file contains Redis commands to recreate state:

  • All agent registrations
  • All task states
  • All messages in inboxes
  • All activity logs

Config Options

In tinytown.toml:

[redis]
persist = true
aof_path = ".tt/redis.aof"


tt restore

Restore Redis state from AOF file.

Synopsis

tt restore

Description

Shows how to restore Redis state from a saved AOF file. This is useful when:

  • Starting on a new machine
  • Recovering from a crash
  • Continuing work from a git checkout

Example

tt restore

Output:

πŸ“‚ AOF file found: ./redis.aof

   To restore from AOF:
   1. Stop Redis if running
   2. Start Redis with: redis-server --appendonly yes --appendfilename redis.aof
   3. Redis will replay the AOF and restore state

   Or just run 'tt init' - it will use existing AOF if present.

Restore Workflow

Option 1: Manual Restore

# Stop any running Redis
pkill redis-server

# Start Redis with AOF enabled
redis-server --appendonly yes --dir . --appendfilename redis.aof --port 0 --unixsocket redis.sock &

# Redis replays AOF and restores state
tt status

Option 2: Restore via tt init

If redis.aof exists in the town directory, tt init will automatically configure Redis to use it:

cd my-project
tt init  # Detects existing redis.aof
tt status  # State is restored!

What Gets Restored

  • βœ… Agent registrations (names, states, models)
  • βœ… Task states (pending, completed, etc.)
  • βœ… Message queues (inbox contents)
  • βœ… Activity logs (recent history)
  • βœ… Stop flags, urgent queues, etc.


tt start

Keep a connection open to the current town.

Synopsis

tt start [OPTIONS]

Description

Connects to an existing town and keeps the process running until Ctrl+C. This is useful for:

  1. Keeping an active town session open during development
  2. Maintaining a persistent connection for debugging
  3. Watching a town without spawning a new agent

Note: Towns automatically connect to central Redis when you run tt init or any command that needs it. This command is mainly for explicitly keeping the town connection open.

Options

Option | Short | Description
--town <PATH> | -t | Town directory (default: .)
--verbose | -v | Enable verbose logging

Examples

Keep Town Running

tt start

Output:

πŸš€ Town connection open
^C
πŸ‘‹ Closing town connection...

With Specific Town

tt start --town ~/git/my-project

When to Use

Most operations don’t require tt start because:

  • tt init provisions the town and connects as needed
  • tt spawn connects and stays alive
  • tt status connects temporarily

Use tt start when you want to:

  • Keep a town session open without spawning agents
  • Debug connection issues
  • Manually control the town lifecycle


tt stop

Request all agents in the current town to stop gracefully.

Synopsis

tt stop [OPTIONS]

Description

tt stop requests all agents in the current town to stop gracefully. It does not shut down the shared central Redis instance, because that instance may be serving other towns.

Note: To fully reset a town, use tt reset instead.

Options

Option | Short | Description
--town <PATH> | -t | Town directory (default: .)
--verbose | -v | Enable verbose logging

Examples

Stop the Town

tt stop

Output:

πŸ›‘ Requested graceful stop for 3 agent(s) in town 'my-project'
   Agents will stop at the start of their next round.
   Central Redis remains available to other towns.

Related Commands

Task | Command
Stop one agent | tt kill <agent>
Reset all state | tt reset
Inspect remaining work | tt inbox --all

Shared Redis Lifecycle

Central Redis in Tinytown is shared across towns:

  • Starts when a town first needs it
  • Remains available after tt stop
  • Should only be shut down through explicit Redis administration, not normal town cleanup

For persistent deployments, consider running Redis independently.


tt config

View or set global configuration.

Synopsis

tt config [KEY] [VALUE]

Description

Manages the global Tinytown configuration stored in ~/.tt/config.toml. This configuration applies to all towns unless overridden.

Arguments

Argument | Description
KEY | Config key to get or set (e.g., default_cli)
VALUE | Value to set (if omitted, shows current value)

Options

Option | Short | Description
--verbose | -v | Enable verbose logging

Available Keys

Key | Description | Values
default_cli | Default AI CLI for agents | claude, auggie, codex, aider, gemini, copilot, cursor
agent_clis.<name> | Custom CLI command for a named CLI | Any command string

Examples

View All Configuration

tt config

Output:

βš™οΈ  Global config: /Users/me/.tt/config.toml

default_cli = "claude"

[agent_clis]
my-custom = "custom-ai --mode agent"

Available CLIs: claude, auggie, codex, aider, gemini, copilot, cursor

Get a Specific Value

tt config default_cli

Output:

claude

Set Default CLI

tt config default_cli auggie

Output:

βœ… Set default_cli = "auggie"
   Saved to: /Users/me/.tt/config.toml

Add Custom CLI

tt config agent_clis.my-ai "my-ai-cli --flag"

Configuration Precedence

  1. CLI argument (--cli)
  2. Town config (tinytown.toml)
  3. Global config (~/.tt/config.toml)
  4. Built-in default (claude)
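This precedence chain is a first-match lookup; a sketch of the resolution logic, with the three Option parameters standing in for the CLI flag, town config, and global config (names are illustrative):

```rust
/// Resolve the effective CLI using the precedence order above.
/// Sketch only: the Option arguments stand in for the --cli flag,
/// tinytown.toml, and ~/.tt/config.toml lookups.
pub fn resolve_cli(
    cli_flag: Option<&str>,
    town_config: Option<&str>,
    global_config: Option<&str>,
) -> String {
    cli_flag
        .or(town_config)
        .or(global_config)
        .unwrap_or("claude") // built-in default
        .to_string()
}
```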

Config File Format

~/.tt/config.toml:

default_cli = "claude"

[agent_clis]
my-custom = "custom-ai --mode agent"


tt towns

List all registered towns.

Synopsis

tt towns [OPTIONS]

Description

Displays all towns registered in ~/.tt/towns.toml. Towns are automatically registered when initialized with tt init. For each town, shows:

  • Town name
  • Connection status
  • Agent count (if online)
  • Path on disk

Options

Option | Short | Description
--verbose | -v | Enable verbose logging

Examples

List Registered Towns

tt towns

Output:

🏘️  Registered Towns (3):

   my-project - [OK] 3 agents (2 active)
      πŸ“‚ /Users/me/git/my-project

   feature-branch - [OK] 1 agents (1 active)
      πŸ“‚ /Users/me/git/my-project

   old-project - [OFFLINE]
      πŸ“‚ /Users/me/git/old-project

Status Indicators

Status | Meaning
[OK] N agents (M active) | Town online with Redis connection
[OFFLINE] | Town exists but Redis not running
⚠️ (no config) | Path exists but no tinytown.toml
❌ (path not found) | Directory no longer exists

No Towns Registered

tt towns

Output:

πŸ“ No towns registered yet.
   Run 'tt init' in a directory to register a town.

Registration

Towns are automatically registered when created:

cd ~/git/my-new-project
tt init --name my-new-project
# Town is now registered in ~/.tt/towns.toml

Registry File

Towns are tracked in ~/.tt/towns.toml:

[[towns]]
name = "my-project"
path = "/Users/me/git/my-project"

[[towns]]
name = "feature-branch"
path = "/Users/me/git/my-project"

See Also

  • tt init β€” Initialize and register a new town
  • tt status β€” Detailed status of current town

tt auth

Authentication management for townhall.

Synopsis

tt auth <SUBCOMMAND>

Description

Manages authentication credentials for the townhall REST API and MCP servers.

Subcommands

gen-key

Generate a new API key and its hash:

tt auth gen-key

Examples

Generate API Key

tt auth gen-key

Output:

πŸ” Generated new API key

API Key (store securely, shown only once):
tt_abc123def456...

API Key Hash (add to tinytown.toml):
$argon2id$v=19$m=19456,t=2,p=1$...

Add to your tinytown.toml:

  [townhall.auth]
  mode = "api_key"
  api_key_hash = "$argon2id$v=19$..."

Then use the API key with townhall:
  curl -H 'Authorization: Bearer tt_abc12...' http://localhost:8787/v1/status

Configuration

After generating a key, add to tinytown.toml:

[townhall]
bind = "127.0.0.1"
rest_port = 8787

[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$m=19456,t=2,p=1$..."

Using the API Key

With curl

curl -H "Authorization: Bearer tt_abc123..." http://localhost:8787/v1/status

In scripts

export TINYTOWN_API_KEY="tt_abc123..."
curl -H "Authorization: Bearer $TINYTOWN_API_KEY" http://localhost:8787/v1/agents

Security Best Practices

  1. Never commit API keys β€” Add to .env or secrets manager
  2. Use environment variables β€” Don’t hardcode in scripts
  3. Rotate keys periodically β€” Generate new keys with tt auth gen-key
  4. Consider OIDC β€” For production, use OIDC authentication


tt migrate

Migrate Redis keys from old format to town-isolated format.

Synopsis

tt migrate [OPTIONS]

Description

The migrate command handles backward compatibility when upgrading to a version of Tinytown that uses town-isolated Redis keys. Older versions stored keys in the format tt:type:id, while newer versions use tt:<town_name>:type:id to support multiple towns sharing the same Redis instance.

This migration is:

  • Safe: Preview with --dry-run before committing
  • Idempotent: Running multiple times has no effect after initial migration
  • Atomic: Each key is renamed atomically

Options

Option | Description
--dry-run | Preview migration without making changes
--force | Skip confirmation prompt

Key Formats

Old Format (pre-isolation)

tt:agent:<uuid>
tt:inbox:<uuid>
tt:urgent:<uuid>
tt:task:<uuid>
tt:activity:<uuid>
tt:stop:<uuid>
tt:backlog

New Format (town-isolated)

tt:<town_name>:agent:<uuid>
tt:<town_name>:inbox:<uuid>
tt:<town_name>:urgent:<uuid>
tt:<town_name>:task:<uuid>
tt:<town_name>:activity:<uuid>
tt:<town_name>:stop:<uuid>
tt:<town_name>:backlog
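The rename itself is a prefix rewrite; a sketch of the key transformation (the function name and the exact already-migrated check are assumptions, not Tinytown's implementation):

```rust
/// Rewrite an old-format key (tt:type:id) into the
/// town-isolated format (tt:<town>:type:id).
/// Returns None if the key is already town-isolated or
/// doesn't match the old layout. Sketch only.
pub fn migrate_key(key: &str, town: &str) -> Option<String> {
    let rest = key.strip_prefix("tt:")?;
    // Already migrated: second segment is the town name.
    if rest.starts_with(&format!("{town}:")) {
        return None;
    }
    const TYPES: [&str; 7] =
        ["agent", "inbox", "urgent", "task", "activity", "stop", "backlog"];
    let type_part = rest.split(':').next()?;
    if TYPES.contains(&type_part) {
        Some(format!("tt:{town}:{rest}"))
    } else {
        None
    }
}
```

Returning None for already-migrated keys is what makes the operation idempotent.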

Examples

Preview Migration (Dry Run)

tt migrate --dry-run

Output:

πŸ” Migration Preview (dry run)
   Town: my-project

   Keys to migrate:
   tt:agent:abc123 β†’ tt:my-project:agent:abc123
   tt:inbox:abc123 β†’ tt:my-project:inbox:abc123
   tt:task:def456 β†’ tt:my-project:task:def456

   Total: 3 key(s) would be migrated

   Run 'tt migrate' (without --dry-run) to perform migration.

Perform Migration (Interactive)

tt migrate

Prompts for confirmation before proceeding.

Perform Migration (Force)

tt migrate --force

Skips the confirmation prompt. Useful for automation/CI.

When to Use

You need to run migration if:

  1. Upgrading from an older version: If you’re upgrading from a version that didn’t have town isolation, your existing keys need migration.

  2. Seeing β€œno migration needed” message: If the command reports no migration needed, your keys are already in the new format.

  3. Sharing Redis between towns: Town isolation allows multiple Tinytown projects to use the same Redis instance without key conflicts.

Recovery

If migration fails partway through:

  1. Check which keys failed in the output
  2. Investigate the specific errors
  3. Run tt migrate again (idempotent - already-migrated keys won’t be re-migrated)


tt mission

Autonomous multi-issue mission mode commands.

Synopsis

tt mission start [OPTIONS]
tt mission status [OPTIONS]
tt mission resume <RUN_ID>
tt mission stop <RUN_ID> [OPTIONS]
tt mission list [OPTIONS]

Description

Mission mode enables durable, dependency-aware orchestration of multiple GitHub issues with automatic PR/CI monitoring. Use these commands to start, monitor, and control missions.

Subcommands

tt mission start

Start a new mission with one or more objectives.

tt mission start --issue <ISSUE>... [--doc <PATH>...] [OPTIONS]

Options:

Option | Short | Description
--issue <ISSUE> | -i | GitHub issue number or URL (repeatable)
--doc <PATH> | -d | Document path as objective (repeatable)
--max-parallel <N> | | Max parallel work items (default: 2)
--no-reviewer | | Disable reviewer requirement

Issue Formats:

  • 23 β€” Issue #23 in current repo
  • owner/repo#23 β€” Fully qualified issue
  • https://github.com/owner/repo/issues/23 β€” Full URL

Examples:

# Start with single issue
tt mission start --issue 23

# Multiple issues
tt mission start -i 23 -i 24 -i 25

# Cross-repo issues
tt mission start --issue my-org/other-repo#42

# Include a design doc
tt mission start --issue 23 --doc docs/design.md

# Allow more parallelism
tt mission start -i 23 -i 24 --max-parallel 4

# Skip reviewer gate
tt mission start -i 23 --no-reviewer

tt mission status

Show status of missions.

tt mission status [--run <ID>] [--work] [--watch]

Options:

Option | Short | Description
--run <ID> | -r | Show specific mission by ID
--work | | Show detailed work item status
--watch | | Show watch items (PR/CI monitors)

Examples:

# Show all active missions
tt mission status

# Specific mission with work items
tt mission status --run abc123 --work

# Include watch items
tt mission status -r abc123 --work --watch

Output:

🎯 Mission Status
   ID: abc123-def456-...
   State: πŸš€ Running
   Created: 2024-01-15 10:30:00 UTC
   Updated: 2024-01-15 11:45:00 UTC

πŸ“‹ Objectives: 3
   - redis-field-engineering/tinytown#23
   - redis-field-engineering/tinytown#24
   - redis-field-engineering/tinytown#25

βš™οΈ  Policy:
   Max parallel: 2
   Reviewer required: true
   Auto-merge: false
   Watch interval: 180s

πŸ“¦ Work Items: 5
   πŸ”΅ ready    Issue #23: Implement auth flow
   πŸ”„ running  Issue #24: Add rate limiting (β†’ backend)
   ⏳ pending  Issue #25: Write tests

tt mission resume

Resume a stopped or blocked mission.

tt mission resume <RUN_ID>

Examples:

tt mission resume abc123-def456-...

tt mission stop

Stop an active mission.

tt mission stop <RUN_ID> [--force]

Options:

Option | Description
--force | Force stop without graceful cleanup

Examples:

# Graceful stop (can be resumed)
tt mission stop abc123

# Force stop (cannot be resumed)
tt mission stop abc123 --force

tt mission list

List all missions.

tt mission list [--all]

Options:

Option | Description
--all | Include completed/failed missions

Examples:

# Active missions only
tt mission list

# All missions including completed
tt mission list --all

Work Item States

State | Emoji | Description
Pending | ⏳ | Waiting for dependencies
Ready | 🔵 | Dependencies satisfied, can be assigned
Assigned | 📌 | Assigned to an agent
Running | 🔄 | Agent is actively working
Blocked | 🚧 | Waiting on external event
Done | ✅ | Completed successfully


Migration Guide: From Gastown to Tinytown

Tried Gastown and found it overwhelming? You’re not alone. Here’s how to get the same results with Tinytown.

Why You’re Here

Gastown is powerful but complex:

  • 50+ concepts to understand
  • Multiple agent types (Mayor, Deacon, Witness, Polecats, etc.)
  • Two-level database architecture (Town beads + Rig beads)
  • Daemon processes, patrols, and recovery mechanisms
  • Hours to set up, days to understand

Tinytown gives you 90% of the value with 10% of the complexity.

Quick Comparison

What you wanted | Gastown way | Tinytown way
Start orchestrating | 10+ commands | tt init
Create an agent | Complex Polecat setup | tt spawn worker
Assign work | gt sling + convoys | tt assign worker "task"
Check status | gt convoy list, gt feed | tt status
Understand it | Read 300K lines | Read 1,400 lines

Concept Mapping

Gastown β†’ Tinytown

Gastown Concept | Tinytown Equivalent
Town | Town ✓ (same name!)
Mayor | You (or your code)
Polecat | Agent
Beads | Tasks (simpler)
Convoy | Task groups (manual)
Hook | Agent’s inbox
Mail | Messages
Witness | Your monitoring code
Refinery | Your CI/CD

What Tinytown Doesn’t Have

Deliberately omitted for simplicity:

Gastown Feature | Tinytown Alternative
Dolt SQL | Redis (simpler)
Git-backed beads | Redis persistence
Two-level DB | Single Redis instance
Daemon processes | Your process manages
Auto-recovery | Manual retry logic
Formulas | Write code directly
MEOW orchestration | Direct API calls

Migration Steps

Step 1: Install Tinytown

git clone https://github.com/redis-field-engineering/tinytown.git
cd tinytown
cargo install --path .

Step 2: Initialize Your Project

Gastown:

# Multiple steps, daemon processes, config files...
gt boot
gt daemon start
# Configure rig, beads, etc.

Tinytown:

mkdir my-project && cd my-project
tt init --name my-project
# Done!

Step 3: Create Agents

Gastown:

# Configure polecat pools, spawn through Mayor...
gt mayor attach
# "Create a polecat for backend work"

Tinytown:

tt spawn backend --model claude
tt spawn frontend --model auggie
tt spawn reviewer --model codex

Step 4: Assign Work

Gastown:

# Create beads, slinging, convoys...
bd create --type task --title "Build API"
gt sling gt-abc12 gastown/polecats/Toast
gt convoy create "Feature X" gt-abc12

Tinytown:

tt assign backend "Build the REST API"
tt assign frontend "Build the UI"

Step 5: Monitor Progress

Gastown:

gt convoy list
gt convoy status hq-cv-abc
gt feed
gt dashboard  # requires tmux

Tinytown:

tt status
tt list

Code Migration

Gastown Pattern: Tell the Mayor

# Gastown: Complex orchestration
# You tell Mayor what you want, Mayor figures out the rest
gt mayor attach
> Build a user authentication system with login, signup, and password reset
# Mayor creates convoy, assigns polecats, tracks progress...

Tinytown Pattern: Direct Control

// Tinytown: You're in control
let town = Town::connect(".").await?;

// Create your team
let designer = town.spawn_agent("designer", "claude").await?;
let backend = town.spawn_agent("backend", "auggie").await?;
let frontend = town.spawn_agent("frontend", "codex").await?;

// Assign work explicitly
designer.assign(Task::new("Design auth API schema")).await?;
wait_for_idle(&designer).await?;

backend.assign(Task::new("Implement auth endpoints")).await?;
frontend.assign(Task::new("Build login/signup UI")).await?;

// Wait for both
tokio::join!(
    wait_for_idle(&backend),
    wait_for_idle(&frontend)
);

When to Use Tinytown vs Gastown

Use Tinytown When:

βœ… You want to understand the system
βœ… You need something working in 30 seconds
βœ… You’re coordinating 1-5 agents
βœ… You want to write your own orchestration logic
βœ… Simple is better than feature-rich

Use Gastown When:

βœ… You need 20+ concurrent agents
βœ… You need git-backed work history
βœ… You need automatic crash recovery
βœ… You need cross-project coordination
βœ… You have time to learn the system

Common Questions

Q: Can I use both? A: Yes! Start with Tinytown for simplicity. If you outgrow it, Gastown’s there.

Q: Is Tinytown production-ready? A: For small teams and projects, yes. For enterprise scale, consider Gastown.

Q: Can I migrate Tinytown work to Gastown? A: Tasks are JSON. You could write a converter to Beads format.

Q: Does Tinytown support everything Gastown does? A: No, and that’s the point. Tinytown does less, but what it does is simple.

Concept Mapping: Gastown β†’ Tinytown

A detailed translation guide for Gastown users.

Agent Taxonomy

Gastown’s Agent Zoo

Gastown has 8 agent types across two levels:

Town-Level:

Agent | Role | Tinytown Equivalent
Mayor | Global coordinator | Your orchestration code
Deacon | Daemon, health monitoring | Your process + monitoring
Boot | Deacon watchdog | External health check
Dogs | Infrastructure helpers | Background tasks

Rig-Level:

Agent | Role | Tinytown Equivalent
Witness | Monitors polecats | Status polling loop
Refinery | Merge queue processor | CI/CD integration
Polecats | Workers | Agents
Crew | Human workspaces | N/A (you’re the human)

Tinytown’s Simplicity

Tinytown has 2 agent types:

Agent | Role
Supervisor | Well-known ID for coordination
Worker | Does the actual work

Everything else? You write it explicitly.

Work Tracking

Gastown Beads

Beads are git-backed structured records:

ID: gt-abc12
Type: task
Title: Implement login API
Status: in_progress
Priority: P1
Created: 2024-03-01
Assigned: gastown/polecats/Toast
Parent: gt-xyz99 (epic)
Dependencies: [gt-def34, gt-ghi56]

Features:

  • Stored in Dolt SQL
  • Version controlled
  • Two-level (Town + Rig)
  • Rich schema with dependencies
  • Prefix-based namespacing

Tinytown Tasks

Tasks are defined in tasks.toml:

[[tasks]]
id = "login-api"
description = "Implement login API"
agent = "backend"
status = "pending"
tags = ["auth", "api"]

Or assigned via CLI:

tt assign backend "Implement login API"

Features:

  • Stored in Redis
  • Defined in TOML (version-controlled)
  • Single level
  • Minimal schema
  • Tags for organization

Translation

Beads Feature | Tinytown Approach
Priority (P0-P4) | Use tags: ["P1"]
Type (task/bug/feature) | Use tags: ["bug"]
Dependencies | Manual coordination
Parent/child | parent_id field
Status history | Not built-in (log it yourself)

Coordination Mechanisms

Gastown: Convoys

Convoys track batches of related work:

gt convoy create "User Auth Feature" gt-abc12 gt-def34 gt-ghi56
gt convoy status hq-cv-xyz

Features:

  • Auto-created by Mayor
  • Tracks multiple beads
  • Lifecycle: OPEN β†’ LANDED β†’ CLOSED
  • Event-driven completion detection

Tinytown: Manual Grouping

Use parent tasks or tags in tasks.toml:

# Option 1: Parent tasks
[[tasks]]
id = "auth-feature"
description = "User Auth Feature"
status = "pending"

[[tasks]]
id = "login"
description = "Login flow"
parent = "auth-feature"
agent = "backend"
status = "pending"

[[tasks]]
id = "signup"
description = "Signup flow"
parent = "auth-feature"
agent = "backend"
status = "pending"

# Option 2: Tags for grouping
[[tasks]]
id = "login-tagged"
description = "Login flow"
tags = ["auth-feature"]
agent = "backend"
status = "pending"

Gastown: Hooks

Hooks are the assignment mechanism:

Polecat has hook β†’ Hook has pinned bead β†’ Polecat MUST work on it

The β€œGUPP Principle”: If work is on your hook, you run it immediately.

Tinytown: Inboxes

Messages go to agent inboxes (Redis lists):

Agent has inbox β†’ Messages queued β†’ Agent polls/blocks for messages

You control when and how agents process work.
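The inbox contract can be modeled with plain queues. In Tinytown the queues live in Redis (tt:<town>:inbox:<id>, plus a separate urgent queue); this in-memory sketch only illustrates the semantics, and the names are hypothetical:

```rust
use std::collections::VecDeque;

/// In-memory stand-in for an agent inbox. The real queues
/// are Redis lists; this sketch models the behavior only.
#[derive(Default)]
pub struct Inbox {
    urgent: VecDeque<String>,
    normal: VecDeque<String>,
}

impl Inbox {
    /// Like: tt send <agent> "<msg>"
    pub fn send(&mut self, msg: &str) {
        self.normal.push_back(msg.to_string());
    }

    /// Like: tt send <agent> --urgent "<msg>"
    pub fn send_urgent(&mut self, msg: &str) {
        self.urgent.push_back(msg.to_string());
    }

    /// Agents drain urgent messages before normal ones.
    pub fn poll(&mut self) -> Option<String> {
        self.urgent.pop_front().or_else(|| self.normal.pop_front())
    }
}
```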

Communication

Gastown: Mail Protocol

Messages are beads of type message:

# Check mail
gt mail check

# Types: POLECAT_DONE, MERGE_READY, REWORK_REQUEST, etc.

Complex routing through beads system.

Tinytown: Direct Messages

Messages are transient, stored in Redis. Send via CLI:

# Send a message to another agent
tt send reviewer "Task complete. Ready for review."

# Send an urgent message
tt send reviewer --urgent "Critical issue found!"

Direct, simple, explicit.

State Persistence

Gastown: Multi-Layer

  1. Git worktrees - Sandbox persistence
  2. Beads ledger - Work state (Dolt SQL)
  3. Hooks - Work assignment
  4. State files - Runtime state (JSON)

Tinytown: Redis

Everything in Redis with town-isolated keys:

  • tt:<town>:agent:<id> - Agent state
  • tt:<town>:task:<id> - Task state
  • tt:<town>:inbox:<id> - Message queues

Enable Redis persistence (RDB/AOF) for durability. Multiple towns can share the same Redis instance.

Recovery

Gastown: Automatic

  • Witness patrols detect stalled polecats
  • Deacon monitors system health
  • Boot watches Deacon
  • Hooks ensure work resumes on restart

Tinytown: Manual

You implement recovery via CLI:

# Check agent health
tt status
tt list

# If an agent is in error state, respawn it
tt prune
tt spawn worker-1 --model claude

# Reassign failed tasks
tt assign worker-1 "Retry the failed operation"

When Tinytown Falls Short

Gastown features you might miss:

Feature | Why It’s Useful | Tinytown Workaround
Automatic recovery | Hands-off operation | Write recovery loops
Git-backed history | Audit trail | Log to files
Dependency graphs | Complex workflows | Manual ordering
Cross-rig work | Multi-repo coordination | Run multiple towns
Dashboard | Visual monitoring | CLI + custom tooling

If you find yourself building these features, consider whether Gastown’s complexity is justified for your use case.

Why Tinytown?

A philosophical guide to choosing simplicity.

The Problem with Complex Systems

Gastown is impressive engineering. It has:

  • Automatic crash recovery
  • Git-backed work history
  • Multi-agent coordination
  • Visual dashboards
  • Sophisticated orchestration

But it also has:

  • 317,898 lines of code to understand
  • 50+ concepts to learn
  • Hours of setup before your first task
  • Days of learning before you’re productive

The Tinytown Philosophy

β€œMake it work. Make it simple. Stop.”

Tinytown takes a different approach:

1. You Don’t Need Most Features

90% of multi-agent orchestration is:

  1. Create agents
  2. Assign tasks
  3. Wait for completion
  4. Check results

Tinytown does exactly this. Nothing more.

2. Complexity Compounds

Every feature adds:

  • Code to maintain
  • Concepts to learn
  • Bugs to fix
  • Documentation to write

Tinytown has 1,448 lines of code. You can read the entire codebase in an afternoon.

3. Explicit is Better Than Magic

Gastown’s Mayor β€œfigures things out” for you:

gt mayor attach
> Build an authentication system
# Mayor creates convoy, spawns agents, distributes work...

Tinytown makes you say what you want:

architect.assign(Task::new("Design auth system")).await?;
developer.assign(Task::new("Implement auth")).await?;
tester.assign(Task::new("Test auth")).await?;

More typing, but you know exactly what’s happening.

4. Recovery is Your Responsibility

Gastown: Witness patrols, Deacon monitors, Boot watches Deacon…

Tinytown: You write a loop:

if agent.state == AgentState::Error {
    respawn_and_retry(agent).await?;
}

Is this more work? Yes. Is it simpler to understand? Also yes.

The Tradeoffs

What You Gain

βœ… Understanding β€” You know how it works
βœ… Speed β€” Running in 30 seconds
βœ… Debuggability β€” 1,400 lines to read
βœ… Control β€” You decide everything
βœ… Simplicity β€” 5 concepts total

What You Lose

❌ Automation β€” You write recovery logic
❌ Scale β€” Designed for 1-10 agents
❌ History β€” No git-backed audit trail
❌ Visualization β€” No built-in dashboard
❌ Federation β€” Single machine focus

When to Choose What

Choose Tinytown If:

  • You’re learning agent orchestration
  • You want to ship something today
  • You have 1-5 agents
  • You prefer explicit over magic
  • You value understanding over features

Choose Gastown If:

  • You need 20+ concurrent agents
  • You need audit trails
  • You need automatic recovery
  • You need cross-project coordination
  • You have time to learn the system

Choose Both If:

Start with Tinytown. Learn the patterns. If you outgrow it, Gastown will make more sense because you understand what problems it’s solving.

A Practical Test

Ask yourself:

  1. How many agents do I need?

    • 1-5: Tinytown
    • 10+: Consider Gastown
  2. How important is automatic recovery?

    • Nice to have: Tinytown
    • Critical: Gastown
  3. How much time do I have?

    • Minutes: Tinytown
    • Days/weeks: Either
  4. Do I want to understand the system?

    • Yes: Tinytown
    • No, just make it work: Gastown (eventually)

The Honest Answer

Tinytown exists because Gastown is hard to start with.

If you’ve bounced off Gastown, Tinytown gets you running. You can always graduate to Gastown laterβ€”and you’ll appreciate its features more because you’ve felt the pain of not having them.

Start simple. Add complexity only when you need it.

β€œPerfection is achieved not when there is nothing more to add, but when there is nothing left to take away.” β€” Antoine de Saint-ExupΓ©ry

Custom Models

Add your own AI model configurations to Tinytown.

Model Configuration

Models are defined in tinytown.toml:

[agent_clis.claude]
name = "claude"
command = "claude --print"

[agent_clis.my-custom-model]
name = "my-custom-model"
command = "/path/to/my/agent --config ./agent.yaml"
workdir = "/path/to/working/dir"

[agent_clis.my-custom-model.env]
API_KEY = "secret"
MODEL_VERSION = "v2"

Model Properties

Property | Required | Description
name | Yes | Identifier for --model flag
command | Yes | Shell command to run the agent
workdir | No | Working directory for the command
env | No | Environment variables
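At spawn time these properties map naturally onto a process builder; a sketch using std::process::Command (the ModelConfig struct and the naive whitespace splitting are simplifications, not Tinytown's launch code):

```rust
use std::process::Command;

/// A model definition as in tinytown.toml (illustrative struct).
pub struct ModelConfig {
    pub command: String,
    pub workdir: Option<String>,
    pub env: Vec<(String, String)>,
}

/// Build (but don't run) the process for a model.
/// The command string is split naively on whitespace;
/// real shells handle quoting, so this is a sketch.
pub fn build_command(cfg: &ModelConfig) -> Command {
    let mut parts = cfg.command.split_whitespace();
    let mut cmd = Command::new(parts.next().expect("empty command"));
    cmd.args(parts);
    if let Some(dir) = &cfg.workdir {
        cmd.current_dir(dir);
    }
    for (k, v) in &cfg.env {
        cmd.env(k, v);
    }
    cmd
}
```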

Example: Local LLM

[agent_clis.local-llama]
name = "local-llama"
command = "llama-cli --model llama-3-70b --prompt-file task.txt"
workdir = "~/.local/share/llama"

[agent_clis.local-llama.env]
CUDA_VISIBLE_DEVICES = "0"

Usage:

tt spawn worker-1 --model local-llama

Example: Custom Script

Create a wrapper script:

#!/bin/bash
# ~/bin/my-agent.sh

# Read task from stdin or argument
TASK="$1"

# Your custom agent logic
python3 ~/agents/my_agent.py --task "$TASK"

Configure:

[agent_clis.my-agent]
name = "my-agent"
command = "~/bin/my-agent.sh"

Example: Docker Container

[agent_clis.docker-agent]
name = "docker-agent"
command = "docker run --rm -v $(pwd):/workspace my-agent:latest"

Programmatic Model Registration

In Rust code:

use tinytown::agent::AgentModel;

// Create custom model
let model = AgentModel::new("my-model", "my-command --flag")
    .with_workdir("/path/to/workdir")
    .with_env("API_KEY", "secret");

// Models are used when spawning
town.spawn_agent("worker", "my-model").await?;

Best Practices

  1. Use absolute paths β€” Relative paths may break
  2. Handle stdin/stdout β€” Agents should read tasks from messages
  3. Set timeouts β€” Don’t let agents run forever
  4. Log output β€” Direct to logs/ directory
  5. Test locally first β€” Before adding to config

Troubleshooting

Command Not Found

# Check the command works directly
/path/to/my/agent --help

Environment Variables Not Set

# Debug by adding echo
command = "env && /path/to/agent"

Working Directory Issues

Use absolute paths:

workdir = "/Users/you/agents"

Not:

workdir = "./agents"  # May not resolve correctly

Redis Configuration

Tinytown uses Redis for message passing and state storage. Here’s how to configure and optimize it.

Default Setup

By default, Tinytown:

  1. Starts a local Redis server
  2. Uses a Unix socket at ./redis.sock
  3. Disables TCP (port 0)
  4. Runs in-memory only

Unix Socket vs TCP

Unix Socket (Default)

[redis]
use_socket = true
socket_path = "redis.sock"

Pros:

  • ~10x faster latency (~0.1ms vs ~1ms)
  • No network overhead
  • No port conflicts

Cons:

  • Local only (same machine)
  • File permissions matter

TCP Connection

[redis]
use_socket = false
host = "127.0.0.1"
port = 6379
bind = "127.0.0.1"

Use for:

  • Remote Redis servers
  • Docker containers
  • Networked deployments

Security

Password Authentication

Enable password authentication for TCP connections:

[redis]
use_socket = false
host = "127.0.0.1"
port = 6379
password = "your-secret-password"

Note: Password is required when binding to non-localhost addresses.

TLS Encryption

Enable TLS for encrypted connections:

[redis]
use_socket = false
host = "redis.example.com"
port = 6379
password = "secret123"
tls_enabled = true
tls_cert = "/etc/ssl/redis.crt"
tls_key = "/etc/ssl/redis.key"
tls_ca_cert = "/etc/ssl/ca.crt"

When TLS is enabled:

  • Tinytown uses the rediss:// URL scheme
  • The non-TLS port is disabled
  • Certificates are passed to the Redis server on startup
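To make the scheme switch concrete, here is a sketch of how a connection URL could be assembled from these settings. This is an illustrative helper, not Tinytown's actual code, and it omits the percent-encoding a real password would need:

```rust
// Sketch: pick redis:// or rediss:// based on the tls_enabled flag.
// Special characters in the password would need percent-encoding
// in practice.
fn redis_url(host: &str, port: u16, password: Option<&str>, tls: bool) -> String {
    let scheme = if tls { "rediss" } else { "redis" };
    match password {
        Some(pw) => format!("{scheme}://:{pw}@{host}:{port}"),
        None => format!("{scheme}://{host}:{port}"),
    }
}

fn main() {
    let url = redis_url("redis.example.com", 6379, Some("secret123"), true);
    assert_eq!(url, "rediss://:secret123@redis.example.com:6379");
    println!("{url}");
}
```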

Security Recommendations

  1. Use Unix sockets for local development - Most secure, no network exposure
  2. Bind to localhost (127.0.0.1) when possible
  3. Always use password for non-localhost bindings
  4. Enable TLS for production and remote connections
  5. Use environment variables for passwords in CI/CD

Connecting to External Redis

Use an existing Redis server instead of starting one:

[redis]
use_socket = false
host = "redis.example.com"
port = 6379
password = "your-password"

Tinytown will connect without starting a new server (external hosts are auto-detected).

Persistence

By default, Redis runs in-memory. Data is lost on restart.

Enable RDB Snapshots

redis-cli -s ./redis.sock CONFIG SET save "60 1"

Saves every 60 seconds if at least 1 key changed.

Enable AOF (Append Only File)

redis-cli -s ./redis.sock CONFIG SET appendonly yes
redis-cli -s ./redis.sock CONFIG SET appendfsync everysec

Logs every write. More durable but slower.

You can combine both approaches:

# Save every 5 min if 1+ change, every 1 min if 100+ changes
redis-cli -s ./redis.sock CONFIG SET save "300 1 60 100"

# Enable AOF with fsync every second
redis-cli -s ./redis.sock CONFIG SET appendonly yes
redis-cli -s ./redis.sock CONFIG SET appendfsync everysec
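The save directive is a flat list of seconds/changes pairs, and Redis snapshots when any pair's condition is met. A small sketch of how those pairs read (plain Rust, for illustration only):

```rust
// Parse a Redis `save` directive into (seconds, min_changes) pairs.
// "300 1 60 100" means: snapshot after 300s if at least 1 change,
// OR after 60s if at least 100 changes.
fn parse_save(cfg: &str) -> Vec<(u64, u64)> {
    let nums: Vec<u64> = cfg
        .split_whitespace()
        .map(|n| n.parse().expect("save takes whole numbers"))
        .collect();
    nums.chunks(2).map(|pair| (pair[0], pair[1])).collect()
}

fn main() {
    assert_eq!(parse_save("300 1 60 100"), vec![(300, 1), (60, 100)]);
    println!("{:?}", parse_save("60 1"));
}
```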

Memory Management

Set Memory Limit

redis-cli CONFIG SET maxmemory 256mb
redis-cli CONFIG SET maxmemory-policy allkeys-lru

Monitor Memory

redis-cli INFO memory

Key Patterns

Tinytown uses town-isolated key patterns:

Pattern                   Type     Purpose
tt:<town>:inbox:<uuid>    List     Agent message queues
tt:<town>:agent:<uuid>    String   Agent state (JSON)
tt:<town>:task:<uuid>     String   Task state (JSON)
tt:broadcast              Pub/Sub  Broadcast channel

This town-isolated format allows multiple Tinytown projects to share the same Redis instance. See tt migrate for upgrading from older key formats.
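If you need to compute these keys yourself (say, in a debugging script), the layout is simple string concatenation. The helper names below are illustrative, not part of the Tinytown API:

```rust
// Hypothetical helpers mirroring the documented key layout.
fn inbox_key(town: &str, uuid: &str) -> String {
    format!("tt:{town}:inbox:{uuid}")
}

fn agent_key(town: &str, uuid: &str) -> String {
    format!("tt:{town}:agent:{uuid}")
}

fn main() {
    assert_eq!(
        inbox_key("my-project", "550e8400"),
        "tt:my-project:inbox:550e8400"
    );
    println!("{}", agent_key("my-project", "550e8400"));
}
```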

Debugging

Connect to Redis

# Unix socket
redis-cli -s ./redis.sock

# TCP
redis-cli -h 127.0.0.1 -p 6379

Useful Commands

# List all tinytown keys for a town (prefer SCAN over KEYS on busy servers)
KEYS tt:<town_name>:*

# Check inbox length
LLEN tt:<town_name>:inbox:550e8400-...

# View agent state
GET tt:<town_name>:agent:550e8400-...

# Monitor all operations
MONITOR

# Get server info
INFO

Clear All Data

# Danger: Deletes everything!
redis-cli -s ./redis.sock FLUSHALL

Docker Deployment

# docker-compose.yml
services:
  redis:
    image: redis:8
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}

volumes:
  redis-data:

Then configure Tinytown:

[redis]
use_socket = false
host = "localhost"
port = 6379
password = "your-docker-redis-password"

Performance Tuning

For Low Latency

  • Use Unix sockets
  • Disable persistence (if acceptable)
  • Use local SSD

For Durability

  • Enable AOF with everysec
  • Use persistent storage
  • Set up replication (advanced)

For High Throughput

  • Increase tcp-backlog
  • Tune timeout and tcp-keepalive
  • Use pipelining in code

Townhall Control Plane

Townhall is the HTTP control plane for Tinytown. It exposes all Tinytown operations via REST API and MCP (Model Context Protocol), enabling remote management from web clients, mobile apps, and AI tools.

Quick Start

# Start townhall daemon (REST API on port 8787)
townhall

# With verbose logging
townhall --verbose

# Custom port
townhall rest --port 9000

# For a specific town
townhall --town /path/to/project

Modes

Townhall supports three modes:

Mode       Command             Transport     Use Case
REST API   townhall rest       HTTP/JSON     Web/mobile clients, scripts
MCP stdio  townhall mcp-stdio  stdin/stdout  IDE extensions, Claude Desktop
MCP HTTP   townhall mcp-http   HTTP/SSE      Browser MCP clients

REST API (Default)

townhall rest --bind 127.0.0.1 --port 8787

All CLI operations are available as HTTP endpoints:

# Get status
curl http://localhost:8787/v1/status

# List agents
curl http://localhost:8787/v1/agents

# Spawn agent
curl -X POST http://localhost:8787/v1/agents \
  -H "Content-Type: application/json" \
  -d '{"name": "worker-1", "cli": "claude"}'

# Assign task
curl -X POST http://localhost:8787/v1/tasks/assign \
  -H "Content-Type: application/json" \
  -d '{"agent": "worker-1", "task": "Fix the bug"}'

MCP Mode

For AI tool integration (Claude Desktop, VS Code, etc.):

# stdio transport (for Claude Desktop)
townhall mcp-stdio

# HTTP/SSE transport (for web clients)
townhall mcp-http --port 8788

See MCP Interface for detailed MCP documentation.

API Reference

Endpoints

Endpoint                     Method  Scope         Description
/healthz                     GET     public        Health check
/v1/town                     GET     town.read     Get town info
/v1/status                   GET     town.read     Get full status
/v1/agents                   GET     town.read     List agents
/v1/agents                   POST    agent.manage  Spawn agent
/v1/agents/{agent}/kill      POST    agent.manage  Stop agent
/v1/agents/{agent}/restart   POST    agent.manage  Restart agent
/v1/agents/prune             POST    agent.manage  Prune dead agents
/v1/tasks/assign             POST    town.write    Assign task
/v1/tasks/pending            GET     town.read     List pending tasks
/v1/backlog                  GET     town.read     List backlog
/v1/backlog                  POST    town.write    Add to backlog
/v1/backlog/{task_id}/claim  POST    town.write    Claim backlog task
/v1/backlog/assign-all       POST    town.write    Assign all backlog
/v1/backlog/{task_id}        DELETE  town.write    Remove backlog task
/v1/messages/send            POST    town.write    Send message
/v1/agents/{agent}/inbox     GET     town.read     Get inbox
/v1/recover                  POST    agent.manage  Recover orphaned agents
/v1/reclaim                  POST    agent.manage  Reclaim tasks

See the OpenAPI spec for complete API documentation.

Error Handling

Errors follow RFC 7807 Problem Details:

{
  "type": "https://tinytown.dev/errors/404",
  "title": "Not Found",
  "status": 404,
  "detail": "Agent 'worker-99' not found"
}
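On the client side these map onto a plain struct. The sketch below builds one by hand to show the field mapping; a real client would derive serde::Deserialize instead, renaming the "type" field because type is a Rust keyword:

```rust
// RFC 7807 problem details as a Rust struct (sketch, not Tinytown code).
struct Problem {
    kind: String, // the JSON "type" field; renamed since `type` is a keyword
    title: String,
    status: u16,
    detail: String,
}

fn main() {
    let p = Problem {
        kind: "https://tinytown.dev/errors/404".to_string(),
        title: "Not Found".to_string(),
        status: 404,
        detail: "Agent 'worker-99' not found".to_string(),
    };
    // 4xx means the request itself was wrong; retrying won't help.
    assert!((400..500).contains(&p.status));
    println!("{} {}: {}", p.status, p.title, p.detail);
}
```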

Configuration

Configure townhall in tinytown.toml:

[townhall]
bind = "127.0.0.1"      # Bind address (default: 127.0.0.1)
rest_port = 8787        # REST API port (default: 8787)
request_timeout_ms = 30000  # Request timeout (default: 30s)

For production deployments, enable Authentication:

[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$..."  # Use: tt generate-api-key

Security

Startup Safety Rules

Townhall enforces security by default:

  1. Non-loopback binding requires authentication - Cannot bind to 0.0.0.0 with auth.mode = "none"
  2. Warnings for API key on non-loopback - Recommends OIDC for production
  3. TLS/mTLS validation - Fails fast on invalid certificate configuration

Best Practices

  • Use 127.0.0.1 for local development
  • Enable API key or OIDC authentication for any network exposure
  • Enable TLS for production deployments
  • Use mTLS for service-to-service communication

Townhall REST API

townhall is Tinytown’s REST control plane daemon. It exposes the same orchestration services used by the tt CLI over HTTP.

Start the Server

# From a town directory
townhall

# Explicit REST mode
townhall rest

# Override bind/port
townhall rest --bind 127.0.0.1 --port 8080

Defaults come from tinytown.toml:

[townhall]
bind = "127.0.0.1"
rest_port = 8787
request_timeout_ms = 30000

Endpoint Groups

The router is split into public/read/write/management groups:

  • Public: GET /healthz
  • Read (town.read): GET /v1/town, GET /v1/status, GET /v1/agents, GET /v1/tasks/pending, GET /v1/backlog, GET /v1/agents/{agent}/inbox
  • Write (town.write): POST /v1/tasks/assign, POST /v1/backlog, POST /v1/backlog/{task_id}/claim, POST /v1/backlog/assign-all, DELETE /v1/backlog/{task_id}, POST /v1/messages/send
  • Agent management (agent.manage): POST /v1/agents, POST /v1/agents/{agent}/kill, POST /v1/agents/{agent}/restart, POST /v1/agents/prune, POST /v1/recover, POST /v1/reclaim

Authentication

townhall supports three auth modes in config:

  • none (default): local/no-auth mode
  • api_key: API key via Authorization: Bearer <key> or X-API-Key
  • oidc: declared in config, not yet implemented in middleware

Generate an API key + Argon2 hash:

tt generate-api-key

Then configure:

[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$..."
api_key_scopes = ["town.read", "town.write", "agent.manage"]

Example request:

curl -H "Authorization: Bearer $TOWNHALL_API_KEY" \
  http://127.0.0.1:8787/v1/status

Startup Safety Rules

At startup, townhall fails fast when:

  • Binding to a non-loopback address with auth.mode = "none"
  • TLS is enabled but cert_file/key_file are missing or invalid
  • mTLS is required but ca_file is missing or invalid

OpenAPI Spec

The REST contract is documented in:

  • docs/openapi/townhall-v1.yaml

You can load this file in Swagger UI, Stoplight, or Redoc for interactive exploration.

Townhall MCP Server

Tinytown includes an MCP server in the townhall binary for LLM/tooling integrations.

Start MCP

# MCP over stdio (for local MCP clients)
townhall mcp-stdio

# MCP over HTTP/SSE
townhall mcp-http

# Override bind/port (default port = rest_port + 1)
townhall mcp-http --bind 127.0.0.1 --port 8081

Registered MCP Tools

Read tools:

  • town.get_status
  • agent.list
  • agent.inbox
  • task.list_pending
  • backlog.list

Write tools:

  • task.assign
  • message.send
  • backlog.add
  • backlog.claim
  • backlog.assign_all
  • backlog.remove

Agent-management/recovery tools:

  • agent.spawn
  • agent.kill
  • agent.restart
  • agent.prune
  • recovery.recover_agents
  • recovery.reclaim_tasks

Tool responses are JSON payloads wrapped as:

{
  "success": true,
  "data": {},
  "error": null
}

Registered MCP Resources

Static resources:

  • tinytown://town/current
  • tinytown://agents
  • tinytown://backlog

Resource templates:

  • tinytown://agents/{agent_name}
  • tinytown://tasks/{task_id}

Registered MCP Prompts

  • conductor.startup_context
  • agent.role_hint (agent_name required, tags optional)

Notes

  • MCP tools call the same Tinytown service layer used by CLI and REST.
  • mcp-http uses Tower MCP’s HTTP/SSE transport and follows standard MCP message semantics.

MCP Interface

Tinytown exposes a full Model Context Protocol (MCP) interface, allowing AI tools like Claude Desktop, Cursor, and other MCP clients to directly orchestrate agent towns.

Quick Start

Claude Desktop Integration

Add to ~/.config/claude/claude_desktop_config.json:

{
  "mcpServers": {
    "tinytown": {
      "command": "townhall",
      "args": ["mcp-stdio", "--town", "/path/to/your/project"]
    }
  }
}

Restart Claude Desktop. You can now ask Claude to manage your agent town!

HTTP/SSE Mode

For browser-based MCP clients:

townhall mcp-http --port 8788
# MCP endpoint: http://localhost:8788

Available Tools

MCP tools provide programmatic access to all Tinytown operations:

Read-Only Tools (town.read scope)

Tool             Description
town.get_status  Get town status including all agents
agent.list       List all agents with current status
backlog.list     List all tasks in the backlog

Write Tools (town.write scope)

Tool                Description
task.assign         Assign a task to an agent
task.complete       Mark a task as completed
message.send        Send a message to an agent
backlog.add         Add a task to the backlog
backlog.claim       Claim a backlog task for an agent
backlog.assign_all  Assign all backlog tasks to an agent

Agent Management Tools (agent.manage scope)

Tool                     Description
agent.spawn              Spawn a new agent
agent.kill               Kill (stop) an agent gracefully
agent.restart            Restart a stopped agent
recovery.recover_agents  Recover orphaned agents
recovery.reclaim_tasks   Reclaim tasks from dead agents

Resources

MCP resources provide read-only data access:

Resource URI              Description
tinytown://town/current   Current town state
tinytown://agents         List of all agents
tinytown://agents/{name}  Specific agent details
tinytown://backlog        Current backlog
tinytown://tasks/{id}     Specific task details

Prompts

MCP prompts provide templated interactions:

Prompt                     Description
conductor.startup_context  Context for conductor startup
agent.role_hint            Role hints for agents

Example Conversation

With MCP configured, you can have natural conversations with Claude:

You: β€œSpawn two backend agents and assign them bug fixes”

Claude: I’ll create two backend agents and assign tasks to them. [Uses agent.spawn tool twice, then task.assign tool] Done! I’ve spawned backend-1 and backend-2 and assigned bug fix tasks to each.

You: β€œWhat’s the status of the town?”

Claude: [Uses town.get_status tool] Your town has 2 agents running:

  • backend-1: Working, 3 tasks completed
  • backend-2: Working, 2 tasks completed

Tool Response Format

All tools return JSON with consistent structure:

{
  "success": true,
  "data": {
    "agent_id": "abc123...",
    "name": "worker-1",
    "cli": "claude"
  }
}

On error:

{
  "success": false,
  "error": "Agent 'worker-99' not found"
}

Transports

Transport  Command             Best For
stdio      townhall mcp-stdio  Claude Desktop, IDE extensions
HTTP/SSE   townhall mcp-http   Browser clients, remote access

stdio Transport

Used by most desktop applications. The MCP server reads JSON-RPC from stdin and writes to stdout.
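A client writes one JSON-RPC message per line. As a rough illustration (hand-built string; a real client would use a JSON library and perform the MCP initialize handshake first), a tools/list request looks like:

```rust
// Build a minimal JSON-RPC 2.0 request line, as an MCP stdio client
// would write it to the server's stdin. Hand-rolled for illustration.
fn jsonrpc_call(id: u32, method: &str) -> String {
    format!("{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"{method}\",\"params\":{{}}}}")
}

fn main() {
    let line = jsonrpc_call(1, "tools/list");
    assert!(line.contains("\"method\":\"tools/list\""));
    println!("{line}");
}
```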

HTTP/SSE Transport

Uses Server-Sent Events for server-to-client messages and HTTP POST for client-to-server messages.

# Start on custom port
townhall mcp-http --bind 0.0.0.0 --port 8788

Authentication & Authorization

Townhall supports multiple authentication modes and fine-grained authorization scopes for secure remote access.

Authentication Modes

None (Default)

No authentication required. Only safe on loopback (127.0.0.1).

[townhall.auth]
mode = "none"  # Default

⚠️ Security: Townhall will refuse to start if binding to non-loopback addresses with mode = "none".

API Key

Secure token-based authentication using Argon2id password hashing.

[townhall.auth]
mode = "api_key"
api_key_hash = "$argon2id$v=19$m=19456,t=2,p=1$..."
api_key_scopes = ["town.read", "town.write"]  # Optional: restrict scopes

Generating an API Key

# Generate a new API key (outputs raw key and hash)
tt generate-api-key

# Output:
# API Key: abc123def456...
# Hash: $argon2id$v=19$m=19456,t=2,p=1$...
#
# Add to tinytown.toml:
# [townhall.auth]
# mode = "api_key"
# api_key_hash = "$argon2id$v=19$..."

Important: Store the raw API key securely. Only the hash is stored in config.

Using API Keys

Pass the key via Authorization header or X-API-Key:

# Bearer token (recommended)
curl -H "Authorization: Bearer <your-api-key>" \
  http://localhost:8787/v1/status

# X-API-Key header
curl -H "X-API-Key: <your-api-key>" \
  http://localhost:8787/v1/status

OIDC (Coming Soon)

OpenID Connect authentication for enterprise deployments.

[townhall.auth]
mode = "oidc"
issuer = "https://issuer.example.com"
audience = "tinytown-api"
jwks_url = "https://issuer.example.com/.well-known/jwks.json"
required_scopes = ["tinytown:access"]
clock_skew_seconds = 60

Authorization Scopes

All endpoints require specific scopes for access:

Scope         Description                      Endpoints
town.read     Read status and agent info       GET /v1/*, inbox
town.write    Assign tasks, send messages      POST /v1/tasks/*, messages, backlog
agent.manage  Spawn/kill/restart agents        POST /v1/agents, recovery
admin         Full access (grants all scopes)  All endpoints

Configuring Scopes

For API key auth, configure allowed scopes:

[townhall.auth]
mode = "api_key"
api_key_hash = "..."
api_key_scopes = ["town.read", "town.write"]  # Read/write but no agent management

Empty api_key_scopes grants admin access (all scopes).
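The check these rules imply can be sketched in a few lines (illustrative only, not townhall's implementation):

```rust
// Documented rules: `admin` grants every scope, and an empty scope
// list is treated as admin (all scopes).
fn is_allowed(granted: &[&str], required: &str) -> bool {
    granted.is_empty() || granted.contains(&"admin") || granted.contains(&required)
}

fn main() {
    let key_scopes = ["town.read", "town.write"];
    assert!(is_allowed(&key_scopes, "town.write"));
    assert!(!is_allowed(&key_scopes, "agent.manage")); // no agent management
    assert!(is_allowed(&[], "agent.manage")); // empty list = admin
    println!("ok");
}
```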

Audit Logging

All mutating operations (POST, PUT, DELETE, PATCH) are logged:

INFO audit: operation completed request_id="abc123" principal="api_key" method="POST" path="/v1/agents" result="success"
WARN audit: operation denied request_id="def456" principal="api_key" method="POST" path="/v1/recover" result="denied"

Audit events include:

  • Request ID - Unique identifier for correlation
  • Principal ID - Who made the request
  • Scopes - What permissions they had
  • Method/Path - What they tried to do
  • Result - success, denied, or error

Security Notes

  • Authorization headers are never logged
  • API keys are stored as Argon2id hashes, never plaintext
  • Auth errors use constant-time responses to prevent timing attacks

TLS Configuration

For production deployments, enable TLS:

[townhall.tls]
enabled = true
cert_path = "/path/to/server.crt"
key_path = "/path/to/server.key"

Mutual TLS (mTLS)

For service-to-service authentication:

[townhall.mtls]
enabled = true
required = true  # Reject clients without valid certs
ca_path = "/path/to/ca.crt"

Security Best Practices

  1. Local development: Use mode = "none" with bind = "127.0.0.1"
  2. Team access: Use mode = "api_key" with TLS
  3. Production: Use mode = "oidc" with TLS and mTLS
  4. Always scope API keys to minimum required permissions
  5. Rotate API keys regularly

API Reference

The Tinytown Rust API for programmatic control.

Core Types

Town

#![allow(unused)]
fn main() {
use tinytown::Town;

// Initialize new town
let town = Town::init("./path", "name").await?;

// Connect to existing
let town = Town::connect("./path").await?;

// Operations
let agent = town.spawn_agent("name", "cli").await?;
let agent = town.agent("name").await?;
let agents = town.list_agents().await;
let channel = town.channel();
let config = town.config();
let root = town.root();
}

Agent

#![allow(unused)]
fn main() {
use tinytown::{Agent, AgentId, AgentType, AgentState};

// Create agent
let agent = Agent::new("name", "cli", AgentType::Worker);

// Supervisor (well-known ID)
let supervisor = Agent::supervisor("coordinator");

// Check state
if agent.state.is_terminal() { /* stopped or error */ }
if agent.state.can_accept_work() { /* idle */ }
}

AgentHandle

#![allow(unused)]
fn main() {
// Get handle from town
let handle = town.spawn_agent("worker", "claude").await?;

// Operations
let id = handle.id();
let task_id = handle.assign(task).await?;
handle.send(MessageType::StatusRequest).await?;
let len = handle.inbox_len().await?;
let state = handle.state().await?;
handle.wait().await?;
}

Task

#![allow(unused)]
fn main() {
use tinytown::{Task, TaskId, TaskState};

// Create
let task = Task::new("description");
let task = Task::new("desc").with_tags(["tag1", "tag2"]);
let task = Task::new("desc").with_parent(parent_id);

// Lifecycle
task.assign(agent_id);
task.start();
task.complete("result");
task.fail("error");

// Check state
if task.state.is_terminal() { /* completed, failed, or cancelled */ }
}

Message

#![allow(unused)]
fn main() {
use tinytown::{Message, MessageId, MessageType, Priority};

// Create
let msg = Message::new(from, to, MessageType::TaskAssign { 
    task_id: "abc".into() 
});

// With options
let msg = msg.with_priority(Priority::Urgent);
let msg = msg.with_correlation(other_msg.id);
}

MessageType

#![allow(unused)]
fn main() {
pub enum MessageType {
    // Semantic types for inter-agent communication
    Task { description: String },
    Query { question: String },
    Informational { summary: String },
    Confirmation { ack_type: ConfirmationType },

    // Task lifecycle
    TaskAssign { task_id: String },
    TaskDone { task_id: String, result: String },
    TaskFailed { task_id: String, error: String },

    // Status
    StatusRequest,
    StatusResponse { state: String, current_task: Option<String> },

    // Lifecycle
    Ping,
    Pong,
    Shutdown,

    // Extensibility
    Custom { kind: String, payload: String },
}

pub enum ConfirmationType {
    Received,
    Acknowledged,
    Thanks,
    Approved,
    Rejected { reason: String },
}
}

Helpers: msg.is_actionable(), msg.is_informational_or_confirmation()

Channel

#![allow(unused)]
fn main() {
use tinytown::Channel;
use std::time::Duration;

let channel = town.channel();

// Messages
channel.send(&msg).await?;
let msg = channel.receive(agent_id, Duration::from_secs(30)).await?;
let msg = channel.try_receive(agent_id).await?;
let len = channel.inbox_len(agent_id).await?;
channel.broadcast(&msg).await?;

// State
channel.set_agent_state(&agent).await?;
let agent = channel.get_agent_state(agent_id).await?;
channel.set_task(&task).await?;
let task = channel.get_task(task_id).await?;
}

Error Handling

#![allow(unused)]
fn main() {
use tinytown::{Error, Result};

match result {
    Ok(value) => { /* success */ }
    Err(Error::Redis(e)) => { /* redis error */ }
    Err(Error::AgentNotFound(name)) => { /* agent missing */ }
    Err(Error::TaskNotFound(id)) => { /* task missing */ }
    Err(Error::NotInitialized(path)) => { /* town not init */ }
    Err(Error::RedisNotInstalled) => { /* redis missing */ }
    Err(Error::RedisVersionTooOld(ver)) => { /* upgrade redis */ }
    Err(Error::Timeout(msg)) => { /* operation timed out */ }
    Err(e) => { /* other error */ }
}
}

Example: Complete Workflow

use tinytown::{Town, Task, AgentState, Result};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<()> {
    // Connect
    let town = Town::connect(".").await?;
    
    // Spawn agents
    let dev = town.spawn_agent("dev", "claude").await?;
    let reviewer = town.spawn_agent("reviewer", "codex").await?;
    
    // Assign work
    dev.assign(Task::new("Build the feature")).await?;
    
    // Wait for completion
    loop {
        if let Some(agent) = dev.state().await? {
            if matches!(agent.state, AgentState::Idle) {
                break;
            }
        }
        tokio::time::sleep(Duration::from_secs(5)).await;
    }
    
    // Send for review
    reviewer.assign(Task::new("Review the feature")).await?;
    
    Ok(())
}