Hanzo Dev is a fast, native coding agent for your terminal. Built in Rust on top of openai/codex, it adds a full-featured chat TUI, multi-agent orchestration, browser automation, theming, and CLI agent piping — while syncing upstream improvements automatically.
- Multi-agent orchestration — `/plan`, `/solve`, and `/code` coordinate Claude, Gemini, Qwen, and GPT simultaneously. Race for speed or reach consensus across models.
- Auto Drive — Hand off complex tasks; the agent coordinates sub-agents, approvals, and recovery autonomously.
- Auto Review — Background ghost-commit watcher reviews code in a separate worktree without blocking your flow.
- CLI agent piping — Spawn and orchestrate `claude`, `gemini`, `qwen`, and custom agents as sub-processes. Works alongside Claude Code, Gemini CLI, and Qwen Code.
- Rich chat TUI — Zen mode, 20+ themes, streaming markdown, syntax highlighting, session management, and card-based activity history.
- Browser integration — CDP support, headless browsing, screenshots captured inline.
- MCP support — 260+ tools via Model Context Protocol. Extend with filesystem, databases, APIs, or custom servers.
- Skills system — Dynamic tool injection with live reload. Define custom skills in `.agents/skills/`.
- Upstream sync — Automated 30-minute polling merges upstream improvements while preserving Hanzo features.
Full changelog | Release notes
```bash
# Via npm (installs native Rust binary)
npm install -g @hanzo/dev
dev

# Or run directly
npx -y @hanzo/dev
```

The binary is named `dev`. The npm package also installs `coder` as an alias.
- ChatGPT sign-in (Plus/Pro/Team) — run `dev` and pick "Sign in with ChatGPT"
- API key — `export OPENAI_API_KEY=sk-... && dev`
- Device code — for headless environments, `dev` prompts a device code flow automatically
```bash
git clone https://github.com/hanzoai/dev.git
cd dev
./build-fast.sh                    # ~20 min cold, ~2 min incremental
./hanzo-dev/target/dev-fast/dev
```
Hanzo Dev ships a full terminal UI built with Ratatui. It's not just a prompt — it's a workspace.
The TUI has three zones: a scrollable history pane (streamed markdown, code blocks, tool calls, exec output), a composer at the bottom for input, and an optional status line showing model, session, and agent state.
| Feature | How |
|---|---|
| Zen mode | Minimal chrome, flush-left borders, animated spinner. Default on. Toggle: Alt+G |
| Themes | 20+ presets (light/dark). /themes to browse and preview live |
| Streaming markdown | Syntax-highlighted code blocks, inline images, reasoning traces |
| Card-based history | Exec output, tool calls, diffs, browser screenshots — each in styled cards |
| Agent terminal | Ctrl+A opens a split view of all running sub-agents with live output |
| Session management | /resume to pick up where you left off, /fork to clone a session |
| Session nicknames | `/nick <name>` to label sessions for easier identification |
| External editor | Ctrl+G opens your $EDITOR for long prompts |
| Plan mode | Streamed plan items with step-by-step approval |
| Undo timeline | Esc Esc opens undo history to roll back changes |
| GH Actions viewer | Live progress tracking for GitHub Actions runs |
| Status line | /statusline to configure what's shown |
| Personality | /personality to set the agent's communication style |
| Key | Action |
|---|---|
| Enter | Send message |
| Ctrl+C | Cancel current operation |
| Esc | Context-dependent: close overlay, pause Auto Drive, clear composer |
| Esc Esc | Open undo timeline |
| Ctrl+A | Toggle agent terminal overlay |
| Ctrl+G | Open external editor |
| Alt+G | Toggle Zen mode |
| Ctrl+L | Clear screen |
| Up/Down | Scroll history, navigate overlays |
Hanzo Dev can orchestrate multiple CLI agents simultaneously. Each command spawns agents in isolated git worktrees.
All configured agents (Claude, Gemini, GPT) review the task and produce a consolidated plan.
/plan "Migrate the auth system from sessions to JWT"
Agents race to solve the problem. Fastest correct answer wins. Based on arxiv.org/abs/2505.17813.
/solve "Why does deleting one user cascade-drop the entire users table?"
Multiple agents implement the solution, then the best result is selected.
/code "Add dark mode support with system preference detection"
Hand off a multi-step task. Auto Drive coordinates agents, manages approvals, and self-heals on failure.
/auto "Refactor the auth flow, add device login, and write tests"
/auto status
Hanzo Dev spawns external CLI agents as sub-processes and streams their output back into the TUI. This lets you use dev as an orchestration layer on top of other AI coding tools.
| Agent | CLI | Install |
|---|---|---|
| Claude Code | `claude` | `npm install -g @anthropic-ai/claude-code` |
| Gemini CLI | `gemini` | `npm install -g @google/gemini-cli` |
| Qwen Code | `qwen` | `npm install -g @qwen-code/qwen-code` |
| Custom | any executable | define in `~/.hanzo/agents/` |
- Multi-agent commands (`/plan`, `/solve`, `/code`) detect installed CLIs on your `PATH` (see the quick check below)
- Each agent runs in its own git worktree with the task prompt
- Output streams into the TUI's agent terminal (`Ctrl+A` to view)
- Results are collected, compared, and the best outcome is applied
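A quick way to see which agents the multi-agent commands will pick up is to check your `PATH` directly (a minimal sketch; Hanzo Dev's own discovery logic may differ):

```bash
# Check which external agent CLIs are discoverable on PATH
for agent in claude gemini qwen; do
  if command -v "$agent" >/dev/null 2>&1; then
    echo "found:   $agent -> $(command -v "$agent")"
  else
    echo "missing: $agent (install it to include it in /plan, /solve, /code)"
  fi
done
```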
Create YAML-frontmatter markdown files in `~/.hanzo/agents/`:
```markdown
---
name: my-reviewer
model: claude
args: ["--model", "claude-sonnet-4-5-20250929"]
---

You are a code reviewer. Review the provided code for bugs, security issues,
and style problems. Be concise and actionable.
```

Use with `/use my-reviewer` or reference it in multi-agent orchestration.
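If you prefer to script the setup, the same agent can be created from the shell (a sketch; the `my-reviewer.md` filename is an assumption, any name under `~/.hanzo/agents/` should follow the convention above):

```bash
# Scaffold the agent definition shown above (filename is illustrative)
mkdir -p ~/.hanzo/agents
cat > ~/.hanzo/agents/my-reviewer.md <<'EOF'
---
name: my-reviewer
model: claude
args: ["--model", "claude-sonnet-4-5-20250929"]
---

You are a code reviewer. Review the provided code for bugs, security issues,
and style problems. Be concise and actionable.
EOF
```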
```
/browser          # Open built-in headless browser
/browser <url>    # Open URL in built-in browser
/chrome           # Connect to external browser via CDP (auto-detect)
/chrome 9222      # Connect to specific CDP port
```

Tip: Install Enso for the best browser integration with Hanzo Dev.
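To use `/chrome 9222`, start your browser with remote debugging enabled first (a sketch; the flag is a standard Chrome/Chromium switch, the binary name varies by platform):

```bash
# Expose the DevTools protocol on port 9222, then run /chrome 9222 inside the TUI
google-chrome --remote-debugging-port=9222 &
```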
```
/new               # Start fresh conversation
/resume            # Pick up a previous session (sortable picker)
/fork              # Clone current session
/nick <name>       # Label this session
/status            # Show session info

/themes            # Browse and preview themes
/settings          # Full settings overlay
/model             # Switch model or provider
/reasoning low|medium|high
/statusline        # Configure status line
/personality       # Set agent communication style
/permissions       # Configure approval policies

/auto "task"       # Start autonomous multi-step task
/auto status       # Check Auto Drive progress

/use <agent>       # Run a custom agent
/plan "task"       # Multi-agent consensus planning
/solve "task"      # Multi-agent racing
/code "task"       # Multi-agent consensus coding
```
```
dev [options] [prompt]

Options:
  --model <name>        Override model (e.g. gpt-5.1, claude-opus-4-6)
  --read-only           Prevent file modifications
  --no-approval         Skip approval prompts
  --config <key=val>    Override config values
  --oss                 Use local open-source models (Ollama)
  --sandbox <mode>      Sandbox level (read-only, workspace-write)
  --url <endpoint>      Connect to custom API endpoint
  --debug               Log API requests/responses to file
  --help                Show help
  --version             Show version
```

`--model` changes the model name sent to the active provider. To switch providers, set `model_provider` in config. Any OpenAI-compatible API works (Chat Completions or Responses).
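For example, to target a local OpenAI-compatible server (a sketch; the URL and model name are placeholders, not defaults):

```bash
# Point dev at a local OpenAI-compatible endpoint such as an Ollama or vLLM server
export OPENAI_BASE_URL="http://localhost:11434/v1"
dev --model llama3.1 "summarize the changes in this repo"
```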
Config file: `~/.hanzo/config.toml`
Note: Hanzo Dev reads from `~/.hanzo/` (primary), `~/.code/`, and `~/.codex/` for backwards compatibility. It only writes to `~/.hanzo/`.
```toml
# Model settings
model = "gpt-5.3-codex"
model_provider = "openai"

# Behavior
approval_policy = "on-request"        # untrusted | on-failure | on-request | never
model_reasoning_effort = "medium"     # low | medium | high
model_reasoning_summary = "detailed"
sandbox_mode = "workspace-write"

# TUI preferences
[tui]
alternate_screen = true
notifications = true
auto_review_enabled = true

[tui.theme]
name = "dark-zen"
zen = true

[tui.spinner]
name = "dots"

# Model profiles
[profiles.claude]
model = "claude-opus-4-6"
model_provider = "anthropic"
model_reasoning_effort = "high"

[profiles.fast]
model = "gpt-5.1"
model_provider = "openai"
approval_policy = "never"

# MCP servers
[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
```

| Variable | Purpose |
|---|---|
| `HANZO_HOME` | Override config directory (default: `~/.hanzo`) |
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic API key (for Claude agent) |
| `OPENAI_BASE_URL` | Custom OpenAI-compatible endpoint |
| `OPENAI_WIRE_API` | Force chat or responses wiring |
Legacy `CODE_HOME` and `CODEX_HOME` are still recognized.
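A typical headless or per-project setup might combine these variables like so (a sketch; the endpoint URL is a placeholder):

```bash
export HANZO_HOME="$PWD/.hanzo-local"               # keep config and auth per project
export OPENAI_BASE_URL="https://api.example.com/v1" # placeholder OpenAI-compatible endpoint
export OPENAI_WIRE_API=chat                         # force Chat Completions wiring
dev
```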
Hanzo Dev reads project context from markdown files:
- `AGENTS.md` or `CLAUDE.md` in your project root — loaded automatically at session start (see the example below)
- Session memory — conversation history persists across `/resume`
- Codebase analysis — automatic project structure understanding
- Skills — custom tools in `.agents/skills/` with live reload
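A minimal `AGENTS.md` might look like this (contents are illustrative, not a required schema):

```bash
# Illustrative: give the agent project-specific context at session start
cat > AGENTS.md <<'EOF'
# Project notes for coding agents
- Build with ./build-fast.sh
- Run tests with cargo nextest run --no-fail-fast
EOF
```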
```bash
# Run a task without approval prompts
dev --no-approval "run tests and fix any failures"

# Read-only analysis
dev --read-only "analyze code quality and generate report"

# With config overrides
dev --config output_format=json "list all TODO comments"
```
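If your build emits JSON for `output_format=json` as in the last example, you can pipe it into other tools (a sketch; the exact JSON shape is not specified here, and `jq` must be installed):

```bash
# Pretty-print the machine-readable output from the example above
dev --read-only --config output_format=json "list all TODO comments" | jq .
```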
Hanzo Dev supports the full MCP specification for tool extensibility:
- Built-in tools — file operations, shell exec, browser, apply-patch
- External servers — filesystem, databases, APIs, custom tools
- Hot reload — MCP servers reload without restarting the TUI
- OAuth scopes — MCP server auth with configurable scopes
Configure servers in `~/.hanzo/config.toml` under `[mcp_servers.<name>]`:
```toml
[mcp_servers.memory]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-memory"]

[mcp_servers.postgres]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
```
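To sanity-check a server definition before wiring it in, you can run the same command the config entry launches (it speaks MCP over stdio and waits for a client, so exit with Ctrl+C):

```bash
# The same command/args the [mcp_servers.memory] entry above launches
npx -y @modelcontextprotocol/server-memory
```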
Auto Review runs in the background during coding sessions:
- Watches for code changes after each turn
- Creates ghost commits in a separate worktree
- Reviews changes using a fast model (configurable)
- Reports issues and suggests fixes without blocking your flow
- Runs in parallel with Auto Drive tasks
Configure in settings or config.toml:
```toml
[tui]
auto_review_enabled = true
review_auto_resolve = false   # true to auto-apply suggested fixes
```
| Feature | OpenAI Codex | Hanzo Dev |
|---|---|---|
| Multi-agent | Single model | /plan, /solve, /code with Claude + Gemini + GPT |
| Agent piping | None | Spawn claude, gemini, qwen as sub-processes |
| Custom agents | None | YAML frontmatter loader, /use command |
| Theme system | Basic | 20+ themes, Zen mode, live preview |
| Browser | None | CDP + internal headless, screenshots inline |
| Auto Review | None | Ghost-commit watcher with auto-resolve |
| Skills | Static | Dynamic injection, live reload |
| Sandbox | Seatbelt | + Bubblewrap (Linux), proxy-aware routing |
| Session mgmt | Basic | Nicknames, forking, sortable resume picker |
| Plan mode | None | Streamed plan items with step approval |
| Upstream sync | N/A | Automated 30-min merge with policy-driven conflict resolution |
Hanzo Dev stays compatible with upstream. The automated merge workflow polls openai/codex every 30 minutes and applies changes using a policy file that protects fork-specific code while adopting upstream improvements.
Hanzo Dev is a Rust workspace with 39+ crates:
```
hanzo-dev/
  cli/                    # Binary entry point (produces `dev`)
  tui/                    # Terminal UI (Ratatui, 1.6MB of widget code)
  core/                   # Config, auth, exec, agents, MCP, git
  protocol/               # Streaming protocol definitions
  exec/                   # Command execution + sandboxing
  browser/                # CDP browser automation
  mcp-client/             # MCP client implementation
  mcp-server/             # MCP server
  code-auto-drive-core/   # Auto Drive orchestration
  cloud-tasks/            # Cloud task management
  login/                  # Auth (ChatGPT, API key, device code)
  ...                     # 28 more supporting crates
```
```bash
./build-fast.sh                                    # Full build (required check)
cargo nextest run --no-fail-fast                   # All workspace tests
cargo test -p hanzo-tui --features test-helpers    # TUI tests
./pre-release.sh                                   # Pre-push validation
```
How is this different from OpenAI Codex CLI?
Hanzo Dev adds multi-agent orchestration, CLI agent piping (Claude/Gemini/Qwen), browser automation, a full theme engine, Auto Review, and skills — while auto-syncing upstream improvements.
Can I use it with Claude Code?
Yes. Install `@anthropic-ai/claude-code` and Hanzo Dev will discover it automatically. Use `/plan`, `/solve`, or `/code` to include Claude in multi-agent workflows, or `/use` with a custom agent definition.
Can I use my existing Codex/Code configuration?
Yes. Hanzo Dev reads from `~/.hanzo/` (primary), `~/.code/`, and `~/.codex/`. It only writes to `~/.hanzo/`.
Does this work with ChatGPT Plus?
Yes. Same "Sign in with ChatGPT" flow as upstream Codex.
Can I use local models?
Yes. `dev --oss` connects to Ollama. Set `model_provider` and `OPENAI_BASE_URL` in config for any OpenAI-compatible endpoint.
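For instance, with Ollama running locally (the model name is illustrative):

```bash
ollama pull llama3.1                                # any locally available model
dev --oss "explain the sandbox modes in this repo"
```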
Is my data secure?
Auth stays on your machine. We don't proxy credentials or conversations.
```bash
git clone https://github.com/hanzoai/dev.git
cd dev
./build-fast.sh
./hanzo-dev/target/dev-fast/dev
```

Enable the repository's git hooks:

```bash
git config core.hooksPath .githooks
```

The pre-push hook runs `./pre-release.sh` when pushing to main.
- Fork and create a feature branch
- Make changes
- `cargo nextest run --no-fail-fast` — tests pass
- `./build-fast.sh` — zero errors, zero warnings
- Submit PR
Apache 2.0 — see LICENSE. Community fork of openai/codex. Upstream LICENSE and NOTICE files preserved.
Hanzo Dev is not affiliated with, sponsored by, or endorsed by OpenAI.
Using OpenAI, Anthropic, or Google services through Hanzo Dev means you agree to their respective Terms. Don't scrape, bypass rate limits, or share accounts.
Auth lives at ~/.hanzo/auth.json. Inputs/outputs sent to AI providers are handled under their Privacy Policies.