AI & Automation
2024 — Present · Featured

Synapse Studio

Cadences Agent Orchestration Engine

The multi-agent orchestration system at the core of Cadences. Synapse processes all incoming data sources (webhooks, forms, scrapers, email, Cadences Bridge), routes them through specialized AI agents, and dispatches results to configured outputs — including triggering no-code workflows built in Cadences. Everything runs server-side on Cloudflare Workers; the animated 2D building interface exists as a debugging and observation tool to verify that agents, routing, and execution plans work correctly.

Year

2024 — Present

Role

Full-Stack Developer

Tech Stack

9 technologies

The Challenge

Cadences needed an intelligent layer between data and action — not just a chatbot, but a system that autonomously processes inputs, reasons across specializations, and produces structured outputs.

  • What is an AI agent? An autonomous entity with its own identity, expertise, personality, and tools — not a generic chatbot but a specialist that can search the web, analyze images, generate visuals, or query databases depending on its capabilities
  • What is an orchestrator? The brain that receives a complex task, evaluates the available team of agents, and creates an execution plan — deciding which agents participate, in what order, and which steps can run in parallel. Like a project manager who assigns work based on each team member's skills and availability
  • The challenge: multi-agent pipelines are inherently opaque. When something fails or produces unexpected results, there's no way to see which agent did what, in what order, and where the reasoning went wrong
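The orchestrator concept above can be sketched as a plain data structure: a plan with agent assignments and parallel groups. The field names (`agent`, `type`, `group`) and the `groupSteps` helper are illustrative assumptions, not Synapse's actual schema:

```javascript
// Hypothetical shape of an orchestrator execution plan (illustrative only).
const plan = {
  task: "Launch-page copy review",
  steps: [
    { id: 1, agent: "marketing-analyst", type: "initial_analysis", group: 0 },
    { id: 2, agent: "design-specialist", type: "specialist_input", group: 0 },
    { id: 3, agent: "director",          type: "director_review",  group: 1 },
  ],
};

// Bucket steps by group so each parallel group can be dispatched together,
// in ascending group order.
function groupSteps(plan) {
  const groups = new Map();
  for (const step of plan.steps) {
    if (!groups.has(step.group)) groups.set(step.group, []);
    groups.get(step.group).push(step);
  }
  return [...groups.entries()]
    .sort(([a], [b]) => a - b)
    .map(([, steps]) => steps);
}
```

Steps 1 and 2 land in the same group (they can run concurrently); step 3 waits in the next group, mirroring the project-manager analogy.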

The Approach

Build Synapse as Cadences' orchestration backbone — connecting inputs, agents, and outputs — with a visual debugging layer:

  • Input sources — webhooks, public forms, email inboxes, scrapers, and the Cadences Bridge feed tasks into Synapse automatically. Each source maps to agents and departments
  • Output destinations — results route to DATA_TABLE rows, Cadences tasks, webhook endpoints, notifications, or trigger no-code workflows configured in the Cadences workflow engine
  • 3-level agent hierarchy — L1 Executors (Llama 8B, fast micro-tasks), L2 Specialists (DeepSeek V3, complex analysis), L3 Directors (DeepSeek/Gemini, strategic review and compilation)
  • Observable debugging — the 2D animated building is not the product; it's the debugger. Agents moving between floors, taking elevators, entering meeting rooms — every animation maps to a real backend event, making it possible to visually trace and verify the system's behavior
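The source-to-destination mapping described above could look roughly like the following sketch. The route table, department names, and output keys are hypothetical stand-ins, not Synapse's real configuration format:

```javascript
// Hypothetical routing table: each input source type maps to a department
// (which selects the agents) and an output destination.
const routes = {
  webhook: { department: "engineering", output: "data_table" },
  form:    { department: "marketing",   output: "task" },
  email:   { department: "support",     output: "notification" },
};

// Resolve a route for an incoming payload, failing loudly on unknown sources.
function route(sourceType, payload) {
  const r = routes[sourceType];
  if (!r) throw new Error(`No route for source: ${sourceType}`);
  return { ...r, payload };
}
```

A declarative table like this keeps source → agent → output wiring in one place, which is what makes the "each source maps to agents and departments" guarantee auditable.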

The Solution

The orchestrator receives a task, evaluates ALL active agents, and returns an execution plan with parallel groups:

  • Orchestration flow — LLM call (DeepSeek V3, configurable) with full team roster → returns JSON plan with agent assignments, step types (initial_analysis → specialist_input → cross_dept_consultation → director_review → compilation), required capabilities, and parallel groups
  • Parallel execution — steps in the same group run via Promise.allSettled (fan-out), groups execute sequentially (fan-in). Marketing + Design research in parallel → Director review after both complete
  • 7 multimodal capabilities — LLM text, web search (Groq compound-beta), vision/ITT (Llama 4 Scout), image gen/TTI (FLUX Schnell), image transform/I2I (SD 1.5), STT (Whisper), TTS (ElevenLabs)
  • CEO Virtual — separate LLM reviews subtask proposals before creating child tasks (max 5 per task, max depth 2), with agent assignment optimization
  • SynapseBot Architect — AI wizard with 15 tools (create_building, create_agent, create_source...) that conversationally builds an entire agency in 6 phases
  • CommandBot CEO — natural language operations (9 tools: create_task, call_meeting, send_to_break, agent_report...) with STT/TTS support
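The fan-out/fan-in pattern from the parallel-execution bullet can be sketched in a few lines. `runAgent` is a hypothetical stand-in for a real agent invocation; the grouping is assumed to come from the orchestrator's plan:

```javascript
// Sketch of fan-out/fan-in: steps within a group run concurrently via
// Promise.allSettled; groups run strictly one after another.
async function executeGroups(groups, runAgent) {
  const results = [];
  for (const group of groups) {
    // Fan-out: every step in this group starts at once.
    const settled = await Promise.allSettled(group.map((step) => runAgent(step)));
    // Fan-in: collect all outcomes (including failures) before the next
    // group starts, so a Director step always sees completed inputs.
    results.push(
      settled.map((s) =>
        s.status === "fulfilled" ? s.value : { error: String(s.reason) }
      )
    );
  }
  return results;
}
```

`Promise.allSettled` (rather than `Promise.all`) matters here: one failing specialist doesn't abort the whole group, so the review step can still reason over partial results.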

Key Results

  • 26K+ lines — 18 JS modules, 5 CSS files, 71+ API handlers, 19 D1 tables
  • AI orchestrator creates parallel execution plans across 1-8 agents per task
  • 3-tier intelligence: L1 Executor (8B) → L2 Specialist (V3) → L3 Director (Gemini)
  • 7 multimodal capabilities: LLM, web, vision, TTI, I2I, STT, TTS
  • SynapseBot (15 tools, 6-phase wizard) + CommandBot CEO (9 tools, STT/TTS)
  • Animated 2D building with elevator queue, cross-floor movement, energy/fatigue system
  • XP, leaderboard, achievements with unlock animations
  • 4 AI providers configurable per tier, per bot, per capability, per org

Tech Stack

Vanilla JS · CSS · Cloudflare D1 · R2 · Workers AI · DeepSeek · Groq · Gemini · FLUX
$ cat project.json
{
  "name": "Synapse Studio",
  "status": "production",
  "stack": [9],
  "results": [8]
}