See what your AI agents are actually doing.
Independent analysis. Visual tracking. Structured synthesis. Local-first.
_y is a visual orchestration layer for AI agents. Most multi-agent frameworks run in a terminal — logs scroll past, JSON piles up, and you have no idea what's happening between agents.
_y makes agent work visible. See who's analyzing what, how reports flow between functions, where agents disagree, and which LLM provider catches what the others miss. Configure agents for your business functions — marketing, engineering, risk, finance — and watch them work independently before a synthesis agent combines everything.
Run locally with Ollama (free), or use cloud LLMs (OpenAI, Anthropic, Google). Mix different providers per function for broader coverage.
Connect your business URL and _y's agents go to work:
Each department analyzes your business independently:
┌─────────────────────────────────────────────────────────┐
│ REPORT: Market Positioning Analysis │
│ Agent: Searchy (5F Marketing) │
│ Model: gemini-2.0-flash │
├─────────────────────────────────────────────────────────┤
│ │
│ Finding: Target site ranks #47 for primary keyword │
│ "AI automation" — competitors hold positions #3-#12. │
│ │
│ Recommendation: Focus on long-tail keywords │
│ "AI company builder" and "local LLM agents" where │
│ competition is 10x lower. │
│ │
│ Risk: Skepty (8F Risk) flags keyword cannibalization │
│ between blog and product pages. │
│ │
│ Status: PENDING REVIEW → Chairman Dashboard │
└─────────────────────────────────────────────────────────┘
Reports flow through a structured chain — not a chatbot:
URL Input → Agent Analysis (independent, parallel)
→ Cross-Department Review
→ Skepty Challenge (independent oversight)
→ Counsely Synthesis (Chief of Staff)
→ Chairman Decision (you)
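The chain above can be sketched as a pipeline in which synthesis only runs after every department has reported independently. This is a conceptual sketch, not _y's actual implementation — the function and type names are illustrative:

```typescript
// Conceptual sketch of the report chain: independent analysis,
// then a challenge pass, then synthesis. Names are illustrative.
type Report = { agent: string; finding: string };

// Each department analyzes the URL without seeing other reports.
function analyze(url: string, agents: string[]): Report[] {
  return agents.map((agent) => ({ agent, finding: `${agent} analysis of ${url}` }));
}

// Skepty challenges only after all independent reports exist.
function challenge(reports: Report[]): Report {
  return { agent: "skepty", finding: `challenged ${reports.length} reports` };
}

// Counsely synthesizes everything into one brief for the Chairman.
function synthesize(reports: Report[], challengeReport: Report): string {
  return `brief: ${reports.length} reports + ${challengeReport.finding}`;
}

const reports = analyze("https://example.com", ["searchy", "buildy", "finy"]);
const brief = synthesize(reports, challenge(reports));
```

The ordering is the point: `challenge` and `synthesize` take the full report set as input, so no agent's analysis can leak into another's.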
| Agent | Department | Example Output |
|---|---|---|
| Searchy | Marketing | SEO audit, competitor keyword gaps |
| Buildy | Engineering | Tech stack analysis, performance bottlenecks |
| Finy | Capital | Revenue model assessment, unit economics |
| Skepty | Risk | Flags blind spots in other agents' reports |
| Buzzy | Content | Content strategy, social media positioning |
| Counsely | Chairman Office | Synthesizes all reports into executive brief |
Key: No agent sees another's analysis until review phase. This prevents groupthink — the Byzantine Principle in practice.
# 1. Clone
git clone https://github.com/antryu2b/_y.git
cd _y
# 2. Install
npm install
# 3. Setup (auto-detects your hardware, recommends models)
npm run setup
# 4. Start
npm run dev
# 5. Start the chat worker (in another terminal)
npm run chat-worker

Open http://localhost:3000 and connect your company.
Once you see the dashboard at localhost:3000:
Type any company website URL into the input field. The agents will analyze it.
Each agent independently analyzes the URL from their department's perspective:
- Searchy checks SEO and search positioning
- Buildy audits the tech stack
- Finy evaluates the business model
- Skepty challenges what others might miss
Reports appear in the Reports panel. Each department submits independently — no agent sees another's work until synthesis.
Counsely (Chief of Staff) combines all department reports into one executive brief with recommendations.
Items flow to the Decision Pipeline where you approve, reject, or modify recommendations.
You enter: https://example-startup.com
→ Searchy: "SEO score 34/100, missing meta descriptions on 12 pages"
→ Buildy: "React 18, no SSR, 4.2s load time on mobile"
→ Finy: "Freemium model, estimated 2.3% conversion rate"
→ Skepty: "Buildy missed: third-party scripts blocking render"
→ Counsely: "Priority: fix mobile performance (affects 68% of traffic)"
→ You: Approve / Modify / Reject
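The last arrow — your Approve / Modify / Reject call — can be modeled as a small state transition on a pipeline item. A hypothetical sketch; _y's actual types may differ:

```typescript
// Hypothetical sketch of the Chairman's decision step.
// Status and type names are illustrative, not _y's actual schema.
type Decision = "approve" | "modify" | "reject";
type Item = { id: number; status: "pending" | "approved" | "modified" | "rejected" };

function decide(item: Item, decision: Decision): Item {
  switch (decision) {
    case "approve": return { ...item, status: "approved" };
    case "modify":  return { ...item, status: "modified" };
    case "reject":  return { ...item, status: "rejected" };
  }
}
```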
Pro tip: Try your own company's URL first. Then try a competitor's.
The setup wizard detects your RAM/GPU and recommends the optimal model profile:
| Profile | RAM | Models | Download |
|---|---|---|---|
| SMALL | 8GB | qwen2.5:7b | ~4GB |
| MEDIUM | 16GB | qwen3:14b + gemma3:12b | ~20GB |
| LARGE | 32GB | qwen3:32b + gemma3:27b | ~55GB |
| X-LARGE | 64GB+ | + llama3.3:70b | ~97GB |
The setup automatically pulls Ollama models and generates an llm-profile.json for optimal agent-model matching.
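The hardware-to-profile mapping in the table amounts to a threshold lookup on installed RAM. A sketch of the idea, not the wizard's actual code:

```typescript
// Sketch of profile selection by installed RAM (GB), per the table above.
function pickProfile(ramGb: number): "SMALL" | "MEDIUM" | "LARGE" | "X-LARGE" {
  if (ramGb >= 64) return "X-LARGE"; // adds llama3.3:70b
  if (ramGb >= 32) return "LARGE";   // qwen3:32b + gemma3:27b
  if (ramGb >= 16) return "MEDIUM";  // qwen3:14b + gemma3:12b
  return "SMALL";                    // qwen2.5:7b
}
```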
Choose your AI backend during setup:
| Provider | Models | Cost | Requirements |
|---|---|---|---|
| Ollama (default) | Qwen3, Gemma3, Llama3, ExaOne | Free | 8GB+ RAM, Ollama installed |
| OpenAI | GPT-4o, GPT-4o-mini | Pay per token | API key |
| Anthropic | Claude Sonnet, Claude Opus | Pay per token | API key |
| Google | Gemini Flash, Gemini Pro | Free tier available | API key |
| Mixed ⭐ | Any combination above | Varies | Multiple keys |
Mixed mode is where _y shines — assign different providers to different departments:
# .env
LLM_PROVIDER=mixed
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...

// llm-profile.json (auto-generated by setup)
{
"provider": "mixed",
"agents": {
"counsely": { "provider": "anthropic", "model": "claude-sonnet-4-20250514" },
"skepty": { "provider": "openai", "model": "gpt-4o" },
"searchy": { "provider": "google", "model": "gemini-2.0-flash" },
"buildy": { "provider": "ollama", "model": "qwen3:32b" }
}
}

Byzantine Principle in action: analysis (Gemini) → challenge (GPT-4o) → synthesis (Claude). Different companies, different architectures, different blind spots.
_y supports three database backends:
Zero configuration. Data stored locally in data/y-company.db.
# No setup needed — tables auto-created on first run

For production deployments with multiple users.
# Set in .env:
DB_PROVIDER=postgres
DATABASE_URL=postgresql://user:password@localhost:5432/y_company
# Create tables:
psql $DATABASE_URL < sql/postgres-schema.sql

Cloud PostgreSQL with authentication and realtime features.
# Set in .env:
DB_PROVIDER=supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_KEY=your-service-role-key
# Create tables in Supabase SQL Editor:
# Copy contents of sql/postgres-schema.sql

All tables live in data/y-company.db:
| Table | Purpose |
|---|---|
| conversations | Agent chat history |
| reports | Agent-generated reports |
| decisions | Decision pipeline items |
| agent_memory | Agent knowledge & memories |
| chat_queue | LLM request queue (worker processes these) |
| meetings | War Room meeting records |
| schedules | Scheduled operations |
| directives | Chairman directives |
| trades | Trading records |
| connected_companies | Connected subsidiary companies |
cp data/y-company.db data/y-company-backup.db

rm data/y-company.db
# Tables will be re-created on next start

"No Consensus, Just Counsel"
Most multi-agent frameworks let AI agents vote and reach consensus. We don't. Based on research into the Byzantine Generals Problem in LLM agents, democratic consensus among AI agents is structurally unreliable.
Instead: Each agent analyzes independently → They discuss in meetings → Counsely synthesizes → You decide. The human stays in the loop.
30 agents, zero consensus — by design.
- 🏢 10-Floor Virtual Tower — Each floor houses a different department
- 🔍 Company Analysis — Connect your URL, agents analyze your business
- ⚔️ War Room — Run meetings with your AI team on any topic
- 📊 Chairman Dashboard — Issue directives, approve decisions, view reports
- 📡 Directive Pipeline — Chairman → Counsely assigns agents → execution → report
- 🌍 Bilingual — English / Korean toggle
┌─────────────────┐ ┌─────────────────┐
│ Next.js App │ │ Chat Worker │
│ (Frontend + │ │ (Background │
│ API Routes) │ │ LLM Caller) │
└────────┬────────┘ └────────┬────────┘
│ │
└────────┬───────────┘
│
┌──────┴──────┐ ┌─────────────────┐
│ SQLite DB │ │ LLM Provider │
│ (Local) │ │ Ollama/OpenAI/ │
└─────────────┘ │ Anthropic/ │
│ Google │
└─────────────────┘
Local or cloud. Your choice.
Unlike flat agent frameworks, _y agents have ranks and reporting lines — just like a real company:
Chairman (You)
├── Counsely — Chief of Staff (synthesis, final briefing)
├── Tasky — Planning Director
│ ├── Finy (Finance) · Legaly (Legal)
│ ├── Buzzy → Wordy, Edity, Searchy (Content)
│ ├── Growthy → Logoy, Helpy, Clicky, Selly (Marketing)
│ ├── Stacky → Watchy, Guardy (ICT)
│ ├── Hiry → Evaly (HR)
│ ├── Buildy → Pixely, Testy (Engineering)
│ └── Opsy (Operations)
├── Skepty — Risk Director (independent challenge)
├── Audity — Audit (independent review)
└── Quanty — Capital Director
└── Tradey, Globy, Fieldy, Hedgy, Valuey
Key design: Risk (Skepty) and Audit (Audity) report directly to the Chairman — never to the departments they review — just as in a real corporate governance structure.
| Floor | Department | Agents | Default LLM |
|---|---|---|---|
| 10F | Chairman's Office | Counsely (Chief of Staff) | Largest available |
| 9F | Planning | Tasky, Finy, Legaly | Strategy model |
| 8F | Risk & Audit | Skepty, Audity | Analysis model |
| 7F | Engineering | Pixely, Buildy, Testy | Dev model |
| 6F | Content | Buzzy, Wordy, Edity, Searchy | Research model |
| 5F | Marketing | Growthy, Logoy, Helpy, Clicky, Selly | Research model |
| 4F | ICT | Stacky, Watchy, Guardy | Dev model |
| 3F | HR | Hiry, Evaly | Analysis model |
| 2F | Capital | Quanty, Tradey, Globy, Fieldy, Hedgy, Valuey | Analysis model |
| 1F | Operations | Opsy | Dev model |
Byzantine Principle: Agents on the same issue with analysis ↔ check ↔ synthesis roles use different LLM architectures to avoid correlated failures.
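Under this principle, a profile can be sanity-checked by asserting that the analysis, challenge, and synthesis roles each sit on a distinct provider. A hypothetical helper, not part of _y:

```typescript
// Hypothetical check: analysis, challenge, and synthesis should not
// share an LLM provider, so their failure modes stay uncorrelated.
type AgentProfile = { provider: string; model: string };

function hasUncorrelatedProviders(roles: AgentProfile[]): boolean {
  const providers = new Set(roles.map((r) => r.provider));
  return providers.size === roles.length;
}

const ok = hasUncorrelatedProviders([
  { provider: "google", model: "gemini-2.0-flash" },   // analysis
  { provider: "openai", model: "gpt-4o" },             // challenge
  { provider: "anthropic", model: "claude-sonnet-4" }, // synthesis
]);
```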
// src/data/agent-config.ts
{
  id: 'datay',
  number: '31',
  name: 'Datay',
  tier: 'Manager',
  floor: 4,
  department: 'ICT Division',
  reportTo: 'stacky',
  llm: LLM_MODELS.qwen3_32b,
  role: 'analyst',
  desc: 'Data Engineering',
  emoji: ''
}
// Add photo: public/agents/datay.png
// Add persona: src/data/personas.ts

The core execution engine:
- Chairman issues directive (Dashboard)
- Counsely assigns agents (Ollama AI selection)
- Agents execute (chat_queue → worker → Ollama)
- Report generated (auto-compiled from all responses)
POST /api/directives → Create directive
POST /api/directive/assign → AI agent assignment
POST /api/directive/execute → Queue agent tasks
POST /api/directive/complete → Compile report
GET /api/directive/status → Poll progress
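The endpoints above imply a directive lifecycle with a fixed order of transitions. A sketch of that state machine; the status names are assumptions, not _y's actual schema:

```typescript
// Sketch of the directive lifecycle implied by the API routes above.
// Status names are illustrative, not _y's actual schema.
type Status = "created" | "assigned" | "executing" | "complete";

const transitions: Record<Status, Status | null> = {
  created: "assigned",    // POST /api/directive/assign
  assigned: "executing",  // POST /api/directive/execute
  executing: "complete",  // POST /api/directive/complete
  complete: null,         // terminal: report compiled
};

function advance(status: Status): Status {
  const next = transitions[status];
  if (next === null) throw new Error(`directive already ${status}`);
  return next;
}
```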
- Frontend: Next.js 16, React 19, Tailwind CSS, shadcn/ui
- Database: SQLite (better-sqlite3) — zero config
- AI: Ollama (local) / OpenAI / Anthropic / Google — your choice
- Icons: Lucide React
- Node.js 18+
- Option A (Local): Ollama installed + 8GB+ RAM
- Option B (Cloud): API key from OpenAI, Anthropic, or Google
# Run all tests
npm test
# Watch mode
npm run test:watch
# Coverage report
npm run test:coverage

| Category | Path | What it covers |
|---|---|---|
| Validation | __tests__/validation/ | No hardcoded keys, required files, dead imports |
| Unit | __tests__/unit/ | Personas, LLM config, company registry |
| Integration | __tests__/integration/ | Directive flow, API routes, data consistency |
| Data | __tests__/data/ | Agent roster (30), floors (10), company config |
MIT — see LICENSE
Visual AI orchestration lets you see what multiple AI agents are doing in real-time — who's analyzing what, where they disagree, and how their work combines into a final output. _y provides this as an open-source dashboard for multi-agent workflows.
CrewAI and AutoGen are agent frameworks — you wire agents together in code. _y is a visual orchestration layer: agents are organized by business function, work independently without seeing each other's output, and you watch the entire flow from analysis to synthesis in a UI. Different LLM providers per function means different blind spots get covered.
Yes. _y works with Ollama on laptops with 8GB RAM or more. The setup wizard detects your hardware and picks appropriate models automatically. No API keys needed for local mode.
Ollama (Qwen, Gemma, Llama), OpenAI (GPT-4o), Anthropic (Claude), and Google (Gemini). You can mix different providers per function — for example, risk analysis on GPT-4o while operations run on local Qwen.
LLM-based agents anchor to whoever speaks first, and similar training data produces similar blind spots. Independent analysis with structured synthesis gives broader coverage and lets you spot where individual models fail.
Business analysis (SEO, risk, market positioning), strategic planning, content strategy, technical audits. Connect your business URL and agents produce independent reports that get synthesized into an executive brief.
visual ai orchestration · multi-agent framework · ai agent dashboard · local llm orchestration · ollama multi-agent · ai workflow visualization · independent agent analysis · multi-llm routing · agent observability · ai company builder · crewai alternative · autogen alternative · local-first ai · agent hierarchy
Visual AI agent orchestration — independent analysis, structured synthesis, local-first. Built for builders who want to see what their agents are doing.
