Browser-based workbench for the ACE (CRISPR-Connect) initiative.
ace-web is the web companion to the ace Claude Code plugin. It
gives a Dimagi team or third-party LLO a place to:
- See every ACE opportunity in their workspace, with per-skill artifacts, judge verdicts, gates, and run-level scorecards (the Workbench).
- Talk to Claude in a multi-player chat that's wired to the same context (multi-player drafts, persistent transcripts, transcript ingest from local `.jsonl` files).
- Onboard a new workspace by pointing at a Google Drive folder — no CLI required for the day-to-day inspection loop.
Drive is the source of truth: opps live as files under ACE/<opp-slug>/
in a workspace's Drive folder; ace-web reads through to them via a shared
service account.
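Concretely, a workspace's Drive folder might look like this (the slug and contents shown are hypothetical; only the `ACE/<opp-slug>/` convention is from the docs):

```
<workspace Drive folder>/
└── ACE/
    ├── summer-health-2026/    # one directory per opp slug (hypothetical name)
    │   └── ...                # per-skill artifacts, judge verdicts, scorecards
    └── water-quality-pilot/
        └── ...
```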
- Initial development complete (2026-04-21). Phases 1-4 shipped. Phase 5 ("Polish": observability, evals, a11y, security review) was reviewed and deferred indefinitely — revisit only when a concrete pain point shows up.
- Multi-tenant Workspaces shipped (2026-04-27). The hard-coded `ACE_DRIVE_ROOT_FOLDER_ID` is now migration-only; each workspace owns its own Drive folder + member list (Owner / Editor / Viewer), with self-onboarding at `/welcome` and invite-by-email at `/invite/<token>`.
For a phase-by-phase status table and the canonical map of where things
live, see CLAUDE.md. For the whole-product vision and
the engineering execution plan it phases into, see
docs/specs/2026-04-08-ace-web-design.md.
- Design spec (the whole vision): docs/specs/2026-04-08-ace-web-design.md
- Implementation plans (per phase): docs/plans/
- Learnings (load-bearing gotchas — read before touching the relevant area): docs/learnings/
- Architecture notes: docs/architecture/
- Deploy runbook: docs/deploy.md
- Agent context (what every Claude session reads first): CLAUDE.md
The broader ACE plugin (CRISPR-Connect orchestration) lives in the
sibling ace repo at ../ace/. ace-web is a separate module — its
design spec lives here, not there. This repo is consumed as a git
submodule from ace, but day-to-day work happens in this repo
directly to avoid submodule pointer churn.
```sh
docker compose up
```

Then open http://localhost:8001/ace/. Backend code edits under `apps/` and `config/` reload automatically (`uvicorn --reload` is wired into the dev compose command). Frontend changes require a rebuild — see below for the fast path.
For UI work, run the Vite dev server alongside docker compose so HMR gives you sub-second reloads:
```sh
# terminal 1 — backend + db + redis
docker compose up

# terminal 2 — Vite dev server with hot module replacement
cd frontend && bun run dev  # or `npm run dev`
```

Then open http://localhost:5173/ace/. The Vite server proxies `/ace/api`, `/ace/auth`, `/ace/admin`, `/ace/share`, and `/ace/ws` to the docker-compose Django on :8001 so cookies, auth, and WebSockets all work end-to-end. The bundled-into-Django path at :8001 stays available for prod-parity smoke checks.
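As a sketch of how that proxy wiring could look in `frontend/vite.config.ts` (the prefixes and port come from the text above; the base path and everything else here is an assumption, not the repo's actual config):

```ts
// Illustrative sketch of the dev proxy, not the repo's actual vite.config.ts.
import { defineConfig } from "vite";

// Mirrors the VITE_BACKEND_PORT override described below; defaults to 8001.
const backend = `http://localhost:${process.env.VITE_BACKEND_PORT ?? "8001"}`;

export default defineConfig({
  base: "/ace/", // assumed, since the app is served under /ace/
  server: {
    proxy: {
      // REST, auth, admin, and share endpoints are forwarded to Django.
      "/ace/api": backend,
      "/ace/auth": backend,
      "/ace/admin": backend,
      "/ace/share": backend,
      // WebSockets need ws: true so the proxy upgrades the connection.
      "/ace/ws": { target: backend, ws: true },
    },
  },
});
```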
The host port is 8001 (not the usual Django default of 8000) so ace-web coexists with CommCare HQ / connect-labs running locally on 8000. Override it with `ACE_WEB_HOST_PORT=<port>` in `.env` and tell Vite about it via `VITE_BACKEND_PORT=<port> bun run dev`.
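For example, to move the backend to 8002 (an arbitrary free port):

```sh
# .env — publish Django on a different host port
ACE_WEB_HOST_PORT=8002

# shell: point the Vite proxy at the same port
VITE_BACKEND_PORT=8002 bun run dev
```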
The container reads the plugin from /app/vendor/ace, which is a
snapshot baked into the image at build time. To pick up plugin edits
(skill renames, manifest path moves, agent frontmatter changes) without
a rebuild, copy the tracked override template:
```sh
cp docker-compose.override.yml.example docker-compose.override.yml
# edit docker-compose.override.yml — point the volume at your host
# plugin checkout (replace /Users/CHANGEME/...)
docker compose up
```

Compose auto-merges `docker-compose.override.yml` (which is gitignored). The override bind-mounts your host plugin path over `/app/vendor/ace` and adds it to uvicorn's `--reload-dir`, so a save in the plugin repo triggers a uvicorn restart and the next request reads the fresh manifest / skill registry.
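For orientation, a sketch of what such an override can look like; the service name `web` and the uvicorn invocation are assumptions, so start from the tracked `.example` file rather than copying this:

```yaml
# docker-compose.override.yml (illustrative sketch; the real template is
# docker-compose.override.yml.example). Service name "web" is an assumption.
services:
  web:
    volumes:
      # Bind-mount your host plugin checkout over the baked snapshot.
      - /Users/CHANGEME/...:/app/vendor/ace
    # Assumed command shape: add the mount to uvicorn's reload dirs so a
    # save in the plugin repo restarts the server.
    command: >
      uvicorn config.asgi:application --host 0.0.0.0 --port 8000
      --reload --reload-dir /app --reload-dir /app/vendor/ace
```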
This is local-only by design — production keeps its baked, immutable
plugin snapshot, and build-backend.yml clones the latest plugin SHA
on every backend image build so deploys always pick up main.
The dev container ships with two escape hatches enabled (`ACE_ALLOW_TEST_LOGIN=True`, `ACE_USE_FAKE_CLI_BACKEND=True` — both set automatically by `config/settings/development.py`):

- On the sign-in page, use the "Sign in as test user" form at the bottom — type any email, get logged in. No CommCare Connect OAuth credentials required.
- Land on `/welcome` and create a workspace. You'll need a Google Drive folder shared with the configured service account if you want Drive features to work; otherwise opps will be empty.
- Try chat — it'll respond with deterministic test text via the `FakeCLIBackend` until you wire up real claude CLI credentials (see `docs/architecture/cli-credentials.md`).
That's enough to click around and understand the surface area. To use ACE for real, configure CommCare Connect OAuth (`CONNECT_OAUTH_CLIENT_ID` and `CONNECT_OAUTH_CLIENT_SECRET` in `.env`), point a workspace at a real Drive folder shared with the service account, and upload claude CLI credentials via `/ace-web:create-cli-credentials`.
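A hypothetical `.env` fragment for that real-OAuth path (variable names from above, placeholder values):

```sh
# .env — CommCare Connect OAuth (values are placeholders)
CONNECT_OAUTH_CLIENT_ID=your-client-id
CONNECT_OAUTH_CLIENT_SECRET=your-client-secret
```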
- Backend: Django 5 + Channels 4 + DRF, ASGI via uvicorn
- Frontend: React 19 + Vite + TypeScript + Tailwind + shadcn/ui
- Data: PostgreSQL (AWS RDS in prod, local Postgres via docker compose)
- Realtime: WebSocket-only (`SessionConsumer`), channels-redis backed by ElastiCache in prod
- Drive access: shared Google service account, key in AWS Secrets Manager (`ACE_DRIVE_SA_KEY_JSON`)
- Claude: local Claude CLI subprocess (`apps/common/CLIBackend`), subscription credential blob in `SystemConfig`
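To make the realtime bullet concrete, here is a hypothetical browser-side connection; the `/ace/ws` prefix is the one proxied above, but the path segment after it and the JSON message shape are assumptions about `SessionConsumer`'s protocol:

```ts
// Illustrative only: connect through the /ace/ws prefix that both the Vite
// proxy and the ALB route to Django Channels.
const scheme = location.protocol === "https:" ? "wss" : "ws";
const ws = new WebSocket(`${scheme}://${location.host}/ace/ws/session/SESSION_ID/`);

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data); // JSON framing is an assumption
  console.log("session event", msg);
};
```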
ace-web runs on AWS ECS Fargate as a tenant of the connect-labs shared infrastructure (cluster `labs-jj-cluster`, ALB path prefix `/ace/*`). Manual deploy:

```sh
gh workflow run deploy-ace-web-labs.yml --ref main -f run_migrations=true
```

See docs/deploy.md for the full runbook (image build, secrets, rollbacks, first-time setup).
```sh
pytest -v      # backend
bunx tsc -b    # frontend type check
ruff check .   # lint
```