Use Cases · December 15, 2025 · 11 min read

No-Code Workflows That Actually Work in Production — 7,000 Lines of Execution Engine

Gonzalo Monzón

Founder & Lead Architect

We've all seen the Zapier demos. They look great in a 3-minute video. But when you need error recovery, conditional branching based on AI analysis, parallel execution, rate limiting, and retry logic — most no-code tools leave you writing code anyway. We decided to build a workflow engine that handles real business complexity, not toy examples. The result: 7,073 lines of execution engine, 5,086 lines of visual editor, and 20+ node types running in production across healthcare, travel, and real estate.

The Production Gap Nobody Talks About

Most no-code workflow tools are designed for simple triggers: "When a form is submitted, send an email." That's fine for basic automation. But real business processes look more like:

  • Lead comes in via webhook → AI scores urgency and intent → if score > 70, create CRM entry AND send personalized WhatsApp → wait 24h → if no reply, AI generates follow-up based on lead context → sends follow-up → if reply mentions "price", generate custom quote with AI → send quote as PDF → schedule call with sales team in 48h

Try building that in Zapier, Make, or n8n. You'll hit limitations within 5 minutes — and when something fails mid-flow, good luck debugging which step broke and why.

The tools that can do it (Temporal, Prefect, Airflow) require engineering teams to set up and maintain. There's a massive gap between "drag-and-drop for simple stuff" and "write code for everything complex." We built in that gap.

The Visual Editor: Canvas API at 60fps

The workflow editor is a full-featured visual canvas built with React + Canvas API — not DOM-based, which means it handles hundreds of nodes without lag:

  • Drag-and-drop: Nodes drag from a categorized palette onto an infinite canvas
  • Curved connections: Bézier curves between output and input ports with hover highlighting
  • Zoom/Pan: Smooth 60fps at any zoom level, tested with 500+ nodes
  • Multi-select: Rectangular selection to grab groups of nodes
  • Minimap: Bird's-eye view of the entire workflow in the corner
  • Undo/Redo: Full change history with keyboard shortcuts
  • Copy/Paste: Duplicate individual nodes or entire subflows
  • Grid snap: Alignment guides for clean layouts

The editor view alone is 5,086 lines of React (WorkflowView.jsx). Each node type has its own dedicated editor modal — 10+ specialized components that give users full control over configuration without overwhelming them.
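The curved connections above are straightforward to sketch. This is not the editor's actual code, just an illustration of the usual approach: compute two control points that push the curve horizontally out of the output port and into the input port, then hand them to the Canvas API.

```javascript
// Illustrative sketch: control points for a horizontal Bézier link between
// an output port (`from`) and an input port (`to`) on a node canvas.
function connectionControlPoints(from, to) {
  // Horizontal "stiffness": push control points outward so the curve leaves
  // the output port to the right and enters the input port from the left.
  const dx = Math.max(Math.abs(to.x - from.x) / 2, 40);
  return {
    c1: { x: from.x + dx, y: from.y },
    c2: { x: to.x - dx, y: to.y },
  };
}

// In the render loop this feeds the Canvas API directly:
//   const { c1, c2 } = connectionControlPoints(from, to);
//   ctx.moveTo(from.x, from.y);
//   ctx.bezierCurveTo(c1.x, c1.y, c2.x, c2.y, to.x, to.y);
```

Because the math is a pure function of the two port positions, hover highlighting and drag previews can reuse it every frame without touching the DOM.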

20+ Node Types Across 6 Categories

Every node is designed for real production use, not for demos. Here's the full taxonomy:

AI Nodes (5 types)

  • AI Agent: Chat with any LLM (Gemini, GPT-4o, Claude, DeepSeek) with configurable system prompt, temperature, max tokens, and context injection from previous nodes
  • AI Image Generator: Generate images with FLUX, DALL-E 3, Imagen 3 — output goes directly to R2 storage
  • Semantic Search: Query embeddings stored in Vectorize — returns ranked results with similarity scores
  • AI Voice Call: Initiate an AI-powered phone call via Twilio + ElevenLabs with conversation script
  • AI Classifier: Classify text into categories — useful for ticket routing, sentiment, intent detection

Communication Nodes (5 types)

  • WhatsApp Sender: Send via official API or local Desktop Agent (with full multimedia support)
  • Email: Send templated emails via Resend with dynamic merge fields
  • TTS Generator: Generate audio files with ElevenLabs voices — useful for podcast intros, IVR, notifications
  • Voice Call: Place outbound calls via Twilio with script or live AI agent
  • Push Notification: Send push to web/mobile clients

Data Nodes (6 types)

  • HTTP Request: GET/POST/PUT/DELETE to any API with headers, auth, body, and response parsing
  • Data Table Query: Read, write, update, or delete rows in Cadences DATA_TABLE (D1)
  • Variable Set/Get: Define and read workflow variables that persist across nodes
  • JSON Transform: Map, filter, reshape JSON data between nodes
  • File Upload: Upload files to R2 storage — images, PDFs, audio, any binary

Control Flow Nodes (7 types)

  • Conditional: If/else branching with complex conditions (supports nested AND/OR logic)
  • Loop: Iterate over arrays — process each lead, each file, each result
  • Delay: Wait seconds, minutes, hours, or days — execution resumes automatically
  • Switch: Multi-branch based on value — like a dispatch table
  • Merge: Rejoin parallel branches into a single flow
  • Error Handler: Catch errors from any node, define fallback behavior
  • Parallel: Execute multiple branches simultaneously — results merge when all complete
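To make the Conditional node's nested AND/OR logic concrete, here is a minimal evaluator sketch — the field names and operators are illustrative, not the node's actual schema:

```javascript
// Illustrative sketch: evaluating a nested AND/OR condition tree
// like the one a Conditional node might store.
function evalCondition(cond, vars) {
  if (cond.all) return cond.all.every((c) => evalCondition(c, vars)); // AND
  if (cond.any) return cond.any.some((c) => evalCondition(c, vars)); // OR
  const left = vars[cond.field];
  switch (cond.op) {
    case "gt": return left > cond.value;
    case "eq": return left === cond.value;
    case "contains": return String(left).includes(cond.value);
    default: throw new Error(`unknown op ${cond.op}`);
  }
}
```

A tree like "score > 70 AND (reply contains 'price' OR reply contains 'quote')" is then just nested `all`/`any` objects, which maps directly onto what the visual editor renders.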

Trigger Nodes (5 types)

  • Webhook: Activate on incoming POST — generates a unique URL per workflow
  • Schedule: Cron-based — every hour, daily, weekly, custom cron expression
  • Manual: One-click execution from the dashboard
  • Data Table Trigger: Activate when a DATA_TABLE row is created/updated/deleted
  • IoT Trigger: Activate when a sensor crosses a threshold (via IoT Hub desktop app)

Specialized Nodes

  • FHIR: HL7 FHIR integration for healthcare — read/write Patient, Appointment, DiagnosticReport resources
  • Scraper: Trigger a web scraping task from the Scraper desktop tool
  • IoT Command: Send commands to IoT devices (turn on/off, set value, read sensor)
  • Tester: Debug node — logs input data, adds assertions, useful for development
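Whatever the category, every node reduces to the same serialized shape: an ID, a type, a config object, and edges between ports. The field names below are hypothetical, purely to illustrate how a workflow like the ones above might be stored:

```javascript
// Hypothetical serialized workflow — illustrative field names,
// not the actual Cadences schema.
const workflow = {
  id: "wf_lead_nurture",
  nodes: [
    { id: "n1", type: "webhook_trigger", config: {} },
    { id: "n2", type: "ai_classifier", config: { categories: ["hot", "warm", "cold"] } },
    { id: "n3", type: "whatsapp_sender", config: { template: "proposal" } },
  ],
  // Edges connect a named output port to a named input port, which is how
  // branching nodes (classifier, conditional, switch) fan out.
  edges: [
    { from: { node: "n1", port: "out" }, to: { node: "n2", port: "in" } },
    { from: { node: "n2", port: "hot" }, to: { node: "n3", port: "in" } },
  ],
};
```

Keeping the definition flat (nodes plus edges, rather than a nested tree) is what lets the same structure drive both the canvas renderer and the execution engine.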

The Execution Engine: 7,073 Lines of Production Logic

The visual editor is the pretty part. The real engineering is in workflowExecutionEngine.js: 7,073 lines that handle every edge case of production workflow execution:

Architecture

Workflows don't execute in the browser. When triggered, the entire workflow definition is sent to a Cloudflare Worker that spins up a Durable Object for execution:

  • Cloudflare Worker: Receives the trigger, validates the workflow, creates the execution instance
  • Durable Object: Maintains persistent state across the entire execution — survives worker restarts, handles long-running flows (hours, days)
  • D1 Database: Two tables — workflow_executions (global state) and workflow_execution_steps (per-node results)
  • Real-time updates: WebSocket + SSE push execution progress to the connected client
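The key property of the Durable Object is that it persists its cursor after every node, so a worker restart resumes instead of re-running the flow. A minimal sketch of that idea, with a generic key-value `storage` standing in for the Durable Object's storage API (class and method names are illustrative, not the actual engine code):

```javascript
// Illustrative sketch: one object owns one execution and checkpoints
// progress after every node.
class WorkflowExecutionDO {
  constructor(storage) {
    this.storage = storage; // durable key-value storage (async put/get)
  }

  async runFrom(workflow, startIndex = 0) {
    for (let i = startIndex; i < workflow.nodes.length; i++) {
      const node = workflow.nodes[i];
      const output = await this.executeNode(node);
      // Persist progress BEFORE moving on: if the worker dies here,
      // the next invocation resumes at node i + 1, not node 0.
      await this.storage.put(`output:${node.id}`, output);
      await this.storage.put("cursor", i + 1);
    }
    return this.storage.get("cursor");
  }

  async executeNode(node) {
    // Stub — the real engine dispatches on node.type (AI, HTTP, delay, ...).
    return { nodeId: node.id, ok: true };
  }
}
```

The same checkpoint is what makes day-long delay nodes cheap: the object writes its state, goes idle, and an alarm wakes it when the wait elapses.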

Execution States

Every execution follows a state machine: queued → running → [waiting] → completed | failed | cancelled. The "waiting" state is key — it handles delay nodes (come back in 24h), human input nodes (wait for approval), and external event nodes (wait for webhook callback).
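The state machine above fits in a transition table. This sketch encodes exactly the states the engine uses; the code itself is illustrative:

```javascript
// Illustrative sketch: legal transitions for
// queued → running → [waiting] → completed | failed | cancelled.
const TRANSITIONS = {
  queued: ["running", "cancelled"],
  running: ["waiting", "completed", "failed", "cancelled"],
  waiting: ["running", "cancelled"], // delay elapsed, approval given, webhook arrived
  completed: [], // terminal
  failed: [],    // terminal
  cancelled: [], // terminal
};

function transition(current, next) {
  if (!TRANSITIONS[current] || !TRANSITIONS[current].includes(next)) {
    throw new Error(`illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```

Making illegal transitions throw (rather than silently overwrite) is what keeps a replayed or duplicated trigger from corrupting a finished execution.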

Step-by-Step Debugging

This is what separates a toy from a tool. Our workflow engine has full debugging capabilities:

  • Breakpoints: Set on any node — execution pauses, you inspect all variables
  • Step Over: Execute one node at a time, see inputs and outputs
  • Inspect: View the exact data flowing into and out of every node in every run
  • Error History: Full stack trace with the node that failed, the input it received, and what went wrong
  • Replay: Re-execute from any specific node — no need to re-run the entire flow

The Variable System: Data Flows Between Nodes

Variables are first-class citizens. Every node's output is accessible to subsequent nodes via template syntax:

  • {{trigger.data.name}} — Data from the trigger event
  • {{node_3.output.text}} — Output of a specific node by ID
  • {{variables.counter}} — User-defined variables set during execution
  • {{env.API_KEY}} — Environment variables (secrets, config)
  • {{date.today}} — Dynamic date/time values
  • {{random.uuid}} — Generated unique identifiers

The variable system supports type inference — when you connect an AI node's output to a conditional node, the editor knows the output is text and offers text-specific comparisons. When a Data Table node outputs a number, you get numeric operators.
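Resolving that template syntax against an execution context is simple at its core — this minimal sketch handles the dotted-path lookup; the real engine layers type inference and per-node scoping on top:

```javascript
// Illustrative sketch: resolve {{path.to.value}} templates against
// an execution context object.
function resolveTemplate(template, context) {
  return template.replace(/\{\{([\w.]+)\}\}/g, (_, path) => {
    // Walk the dotted path: "trigger.data.name" -> context.trigger.data.name
    const value = path
      .split(".")
      .reduce((obj, key) => (obj == null ? undefined : obj[key]), context);
    return value === undefined ? "" : String(value);
  });
}
```

Every node runs its string inputs through this resolver before executing, which is why a WhatsApp message body can freely mix literal text with `{{node_3.output.text}}`.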

Real Workflows Running in Production

Lead Nurturing — Travel Agency

Webhook (new lead from website)
  → AI Classifier (hot / warm / cold)
    ├── HOT → AI Voice Call (contact immediately)
    │         → Data Table (register in CRM)
    │         → WhatsApp (send personalized proposal)
    ├── WARM → Email (nurturing sequence)
    │          → Delay (3 days) → Email (follow-up)
    └── COLD → Data Table (archive)
               → Schedule (re-evaluate in 30 days)

IoT Monitoring — Industrial Client

IoT Trigger (temperature > 30°C)
  → Conditional (business hours?)
    ├── YES → WhatsApp (alert team)
    │         → IoT Command (activate ventilation)
    └── NO  → Email (night alert)
              → IoT Command (activate ventilation)
              → AI Voice Call (call on-duty manager)
  → Data Table (log incident)

Multi-Channel Content — Weekly Automation

Schedule (Monday 9:00 AM)
  → AI Agent (generate blog topic)
  → AI Agent (write article)
  → AI Image (generate cover image)
  → Parallel
    ├── HTTP (publish to blog CMS)
    ├── WhatsApp (share with subscribers)
    ├── Email (weekly newsletter)
    └── HTTP (post to social media)
  → Data Table (register publication)

Patient Reminders — Imaging Center

Schedule (daily 8:00 AM, process appointments for +48h)
  → Data Table (get tomorrow's appointments)
  → Loop (for each appointment)
    → FHIR (fetch patient details)
    → AI Agent (generate reminder in patient's language)
    → WhatsApp Sender (send reminder)
    → Conditional (delivery failed?)
      ├── YES → Email (fallback)
      └── NO  → Data Table (log: delivered)
  → Data Table (daily summary report)

Error Handling That Actually Works

In production, things fail. Constantly. An API returns 500, a WhatsApp message doesn't deliver, an AI provider times out. Our engine handles this at three levels:

  • Node-level retry: Each node can define its own retry count (1-5) and backoff strategy (linear, exponential)
  • Error Handler nodes: Catch errors from specific nodes and define alternative flows — send a Slack notification, log the error, try a different provider
  • Workflow-level timeout: If the entire execution exceeds a configurable time limit, it's marked as failed with a summary of what completed and what didn't
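Node-level retry with linear or exponential backoff can be sketched in a few lines — the delays and defaults here are illustrative, not the engine's actual values:

```javascript
// Illustrative sketch: retry a node's work with configurable backoff.
async function executeWithRetry(fn, { retries = 3, strategy = "exponential", baseMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // exhausted: surface to an Error Handler node
      const delay = strategy === "exponential"
        ? baseMs * 2 ** attempt      // 500ms, 1s, 2s, 4s, ...
        : baseMs * (attempt + 1);    // 500ms, 1s, 1.5s, 2s, ...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The error that finally escapes carries the last failure, so the Error Handler node (or the execution log) sees the real cause, not a generic "retries exhausted".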

Every failed execution preserves the full state. You can inspect exactly which node failed, what data it received, and retry from that point — not from the beginning.

The Numbers

📄 Execution Engine: 7,073 lines (workflowExecutionEngine.js)

🎨 Visual Editor: 5,086 lines (WorkflowView.jsx)

🧩 Node Types: 20+ across 6 categories

⚡ Canvas Performance: 60fps zoom/pan with 500+ nodes

🔧 Node Editors: 10+ specialized modal components

💾 Backend: Cloudflare Workers + Durable Objects + D1

🏥 Sectors using it: Healthcare, travel, real estate, retail

The Real Insight

After building all of this, here's what we've learned: the value of "no-code" isn't that it's easy. It's that it's visible. When a non-technical project manager can look at a workflow diagram and understand exactly what happens at each step — what triggers it, what data flows where, what happens when something fails — that's when automation stops being a scary black box and becomes a team asset.

The 7,073 lines of engine code exist so that the person building the workflow never has to think about Durable Objects, D1 transactions, WebSocket reconnection, or retry backoff strategies. They just drag, connect, and deploy. The complexity is hidden — but it's there, handling every edge case, 24/7.

Tags

Workflows No-Code Automation Visual Programming Durable Objects Canvas API Production

About the Author

Gonzalo Monzón

Founder & Lead Architect

Gonzalo Monzón is a Senior Solutions Architect & AI Engineer with over 26 years building mission-critical systems in Healthcare, Industrial Automation, and enterprise AI. Founder of Cadences Lab, he specializes in bridging legacy infrastructure with cutting-edge technology.

Stay in the loop

Get notified when we publish new articles about AI automation, use cases, and practical guides.