Heartbeat: A Proactive Health Engine That Thinks Before You Ask
Gonzalo Monzón
Founder & Lead Architect
Most health apps wait for you to do something — log a meal, check a metric, ask a question. Heartbeat flips that model entirely. It's a proactive health engine that beats periodically, analyzes your full context through AI, and generates recommendations, alerts, and actions before you ask. Think of it as a background health intelligence that never stops thinking about you.
Heartbeat isn't an app — it's the engine underneath apps. It's the foundation we built NutriNen Baby on, and the base for future health applications targeting seniors, fitness, and mental health. This article covers the architecture: User DNA profiles, Deep Pulse analysis, crisis simulation, the development sandbox, and the UI Action Resolver that translates AI decisions into interface commands.
The Heartbeat Metaphor (It's Literal)
The name isn't decorative. The system literally beats — executing periodic analysis cycles just like a heart pumps blood:
Trigger (time / event / manual)
│
├── Gather context
│   ├── User DNA profile
│   ├── Last 20 heartbeats
│   ├── Sensor data (weight, temp, sleep, glucose)
│   └── Gamification state (XP, level, streaks)
│
├── AI Analysis (Deep Pulse)
│   ├── Health evaluation
│   ├── Trend detection (improving/worsening)
│   ├── Risk assessment
│   └── Action generation
│
├── UI Action Resolver
│   └── Translate decisions → cards / alerts / navigation
│
└── Persist results
    ├── D1 (heartbeat log)
    ├── User DNA (updated)
    └── Gamification state
Each beat enriches the user's profile, detects trends, assesses risks, and generates concrete actions — all without user input. The user just sees smart cards appearing: "Your baby needs an afternoon snack", "Gluten detected in last meal — alert", "15-day streak! Badge unlocked."
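In code, one beat reduces to a gather → analyze → resolve → persist pipeline. A minimal sketch, with all names illustrative rather than the engine's actual API:

```typescript
// Hypothetical shapes for one heartbeat cycle (names are illustrative).
interface HeartbeatContext {
  dna: Record<string, unknown>;                       // User DNA profile
  recentBeats: unknown[];                             // last 20 heartbeats
  sensors: Record<string, number>;                    // weight, temp, sleep, glucose
  gamification: { xp: number; level: number; streak: number };
}

interface UIAction {
  action: string;
  [key: string]: unknown;
}

// One beat: gather context, run AI analysis, persist, return UI actions.
function beat(
  gather: () => HeartbeatContext,
  analyze: (ctx: HeartbeatContext) => UIAction[],
  persist: (ctx: HeartbeatContext, actions: UIAction[]) => void
): UIAction[] {
  const ctx = gather();
  const actions = analyze(ctx);
  persist(ctx, actions);
  return actions;
}
```

The point of the shape: the trigger source (timer, sensor event, manual) is irrelevant to the pipeline itself, which is what lets the same cycle run in production and in the sandbox.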
User DNA: Living Profiles That Evolve
Every user has a comprehensive JSON profile — their "DNA" — that evolves with every interaction:
{
  "demographics": { "age": 8, "stage": "toddler" },
  "health": { "allergies": ["gluten"], "conditions": [] },
  "preferences": { "vegetarian": true, "spicy_level": 0 },
  "history": { "meals_logged": 342, "streaks": 15 },
  "gamification": { "level": 12, "xp": 4500 },
  "devices": { "wearable": true, "scale": true }
}
Key design decisions:
- Auto-enrichment — every interaction updates the DNA. Log a meal? Preferences refine. Skip meals for 2 days? History flags it. Unlock a badge? Gamification state updates.
- Predefined personas — Baby, Celiac, Student, Senior. Quick-start profiles that give the AI immediate context without a lengthy onboarding flow.
- Full editability — in the Heartbeat Studio sandbox, developers can manually edit any DNA field to test how the engine responds to different user states.
The DNA is what makes Heartbeat's recommendations actually useful. The AI doesn't know "a generic user" — it knows this specific person with gluten allergies who's vegetarian and has logged 342 meals with a 15-day streak at level 12.
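The enrichment rule ("every interaction updates the DNA") can be sketched as a pure function that returns a new profile rather than mutating in place. Field names follow the JSON above; the function name and the update logic are hypothetical:

```typescript
// Illustrative User DNA shape, mirroring the JSON profile above.
interface UserDNA {
  demographics: { age: number; stage: string };
  health: { allergies: string[]; conditions: string[] };
  preferences: { vegetarian: boolean; spicy_level: number };
  history: { meals_logged: number; streaks: number };
  gamification: { level: number; xp: number };
}

// Hypothetical enrichment step: logging a meal bumps the history counter
// and refines the vegetarian preference based on what was actually eaten.
function enrichAfterMeal(dna: UserDNA, meal: { vegetarian: boolean }): UserDNA {
  return {
    ...dna,
    history: { ...dna.history, meals_logged: dna.history.meals_logged + 1 },
    preferences: {
      ...dna.preferences,
      vegetarian: dna.preferences.vegetarian && meal.vegetarian,
    },
  };
}
```

Returning a fresh object keeps every heartbeat's input DNA intact in the log, which is what makes the "last 20 heartbeats" history replayable.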
Deep Pulse: Strategic Health Analysis
Deep Pulse is the brain of each heartbeat — a multi-layered AI analysis that evaluates the user's current state and generates actions:
| Component | Function | Output |
|---|---|---|
| Health Analysis | Current health state evaluation | Health score (0-100) |
| Trend Detection | Improving, stable, or worsening patterns | Delta from last 5 heartbeats |
| Risk Assessment | Future risk evaluation based on patterns | Risk level + specific concerns |
| Action Generation | Concrete recommended actions | Cards, alerts, navigation commands |
| Travel Engine | Geographic context for recommendations | Location-aware suggestions |
The Travel Engine is particularly interesting — it adjusts recommendations based on the user's location. Traveling to a region known for specific food allergens? The engine proactively warns. Different timezone disrupting meal schedules? It adjusts its timing recommendations.
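The table's components map naturally onto a single result object. A hypothetical shape, plus trend detection reduced to its essence (the thresholds are invented for illustration, not the engine's real values):

```typescript
// Hypothetical Deep Pulse result combining the components in the table.
interface DeepPulseResult {
  healthScore: number;                          // 0-100
  trend: "improving" | "stable" | "worsening";
  trendDelta: number;                           // vs. last 5 heartbeats
  risk: { level: "low" | "medium" | "high"; concerns: string[] };
  actions: { action: string }[];
}

// Trend detection at its simplest: compare the current health score
// against the average of the last five heartbeats.
function detectTrend(
  previousScores: number[],
  current: number
): DeepPulseResult["trend"] {
  const recent = previousScores.slice(-5);
  const avg = recent.reduce((a, b) => a + b, 0) / recent.length;
  const delta = current - avg;
  if (delta > 2) return "improving";
  if (delta < -2) return "worsening";
  return "stable";
}
```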
Crisis Simulation: Stress-Testing the AI
How do you know your health engine responds correctly when things go wrong? You simulate crises.
The Crisis Simulation module runs predefined stress scenarios:
- Allergy detection — "User ate food containing allergen X. What does the engine do?"
- Data gaps — "No data for 48 hours. How does the engine respond?"
- Emergency conditions — "Abnormal vital signs detected. What's the escalation?"
- Broken streaks — "User broke a 30-day streak. How does gamification respond?"
- Stress testing — "Run 20+ heartbeats consecutively at 2-second intervals. Any degradation?"
Each scenario measures response time, decision quality, and whether the engine generates the correct UI actions. It's essentially integration testing for AI behavior — not just "does the code run?" but "does the AI make the right decision under pressure?"
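A pass/fail check for the allergy scenario might look like the sketch below. The result shape and the timing threshold are assumptions for illustration, not the actual test harness:

```typescript
// Hypothetical result of running one crisis scenario.
interface ScenarioResult {
  responseMs: number;
  actions: { action: string; severity?: string }[];
}

// The allergy scenario passes only if the engine responded quickly
// AND surfaced a high-severity alert among its UI actions.
function passesAllergyScenario(result: ScenarioResult): boolean {
  const fastEnough = result.responseMs < 2000;
  const alerted = result.actions.some(
    (a) => a.action === "show_alert" && a.severity === "high"
  );
  return fastEnough && alerted;
}
```

The same pattern generalizes: each scenario pairs a trigger with an assertion over the actions the engine emitted, which is what turns "does the AI decide correctly?" into something a CI pipeline can answer.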
Heartbeat Studio: The Development Sandbox
Heartbeat Studio is where developers build and test the engine. It has six panels:
| Panel | Function |
|---|---|
| Profile Selector | Select and edit User DNA profiles |
| Trigger Box | Inject events (time, sensors, user actions) |
| AI Thought Viewer | Raw JSON visualization of AI reasoning |
| UI Preview | Renders cards as they would appear in the app |
| Metrics Dashboard | Health score, game state, deltas in real-time |
| Heartbeat Log | Timeline of last 20 heartbeats with filters |
The Trigger Box is the power tool. It lets you inject events that provoke heartbeats:
- "Advance 6 hours" — simulates time passage
- "Inject glucose reading: 180" — simulates sensor data
- "User logged pasta" — simulates a meal containing potential allergens
- "Auto-heartbeat mode" — fires heartbeats every 2 seconds for stress testing
- "Run scenario: 48h no data" — triggers the crisis simulation
Three execution modes cover different development needs: Mock (simulated data, fast iteration), API (real Cloudflare Workers backend, persistent data), and Hybrid (mock with selective API validation for spot-checking).
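The injected events above suggest a small discriminated union. A hypothetical sketch of the Trigger Box event types (none of these type names come from the actual codebase):

```typescript
// Illustrative Trigger Box events, one variant per injection listed above.
type TriggerEvent =
  | { kind: "advance_time"; hours: number }
  | { kind: "sensor"; sensor: "glucose" | "weight" | "temp"; value: number }
  | { kind: "meal_logged"; food: string }
  | { kind: "scenario"; name: string };

// Render the event as the label the Studio would display.
function describeTrigger(e: TriggerEvent): string {
  switch (e.kind) {
    case "advance_time": return `Advance ${e.hours} hours`;
    case "sensor": return `Inject ${e.sensor} reading: ${e.value}`;
    case "meal_logged": return `User logged ${e.food}`;
    case "scenario": return `Run scenario: ${e.name}`;
  }
}
```

A discriminated union buys exhaustiveness checking: adding a new trigger kind forces every handler, including the heartbeat dispatcher, to account for it at compile time.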
UI Action Resolver: AI Thoughts → Interface Commands
The bridge between AI reasoning and user-visible actions:
AI decides: "show allergy alert"
│
└── UI Action Resolver translates to:
    {
      "action": "show_alert",
      "type": "allergy_warning",
      "severity": "high",
      "content": "Gluten detected in last meal"
    }
Action types: show_card, show_alert, navigate, update_score, unlock_badge, play_sound, show_celebration. The AI generates high-level intentions; the Resolver maps them to concrete UI commands that any frontend app can consume.
This decoupling is what makes Heartbeat an engine rather than an app. NutriNen Baby renders these actions as cute baby-themed cards with particle effects. A future fitness app could render them as workout notifications. Same engine, different personality.
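A minimal sketch of that mapping, assuming a simple lookup table inside the Resolver (the table contents and function name are illustrative):

```typescript
// The seven action types listed above.
type ActionType =
  | "show_card" | "show_alert" | "navigate" | "update_score"
  | "unlock_badge" | "play_sound" | "show_celebration";

interface UICommand {
  action: ActionType;
  [key: string]: unknown;
}

// Hypothetical resolver: map a high-level AI intention to a concrete
// UI command that any frontend can consume.
function resolveIntent(intent: string): UICommand {
  const table: Record<string, UICommand> = {
    allergy_alert: {
      action: "show_alert",
      type: "allergy_warning",
      severity: "high",
      content: "Gluten detected in last meal",
    },
    badge_unlocked: { action: "unlock_badge" },
  };
  // Unknown intentions degrade to a neutral card rather than failing.
  return table[intent] ?? { action: "show_card" };
}
```

The fallback branch matters: because the AI generates free-form intentions, the Resolver has to be total, never letting an unrecognized intention crash the UI.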
Card Schemas: Context-Aware UI
The engine generates cards with different visual schemas depending on context:
| Schema | Usage | Example |
|---|---|---|
| Nido (Nest) | Home, feeds, routines | "Your baby needs their afternoon snack" |
| Escudo (Shield) | Alerts, protection | "Alert: food containing gluten" |
| Biblioteca (Library) | Info, education | "Article: Introduction to BLW" |
| Invernadero (Greenhouse) | Progress, growth | "15-day streak! 🌱" |
Each schema carries visual hierarchy, urgency level, and interaction patterns. Escudo cards always take priority (health/safety first). Nido cards are warm and routine. Biblioteca cards are dismissible. Invernadero cards are celebratory with animations.
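That priority rule can be captured as a fixed ordering. A sketch, assuming Escudo always outranks the other schemas and the rest follow the warmth-to-dismissible order described above:

```typescript
// Assumed priority order: safety first, then routine, growth, education.
const SCHEMA_PRIORITY = ["escudo", "nido", "invernadero", "biblioteca"] as const;
type Schema = (typeof SCHEMA_PRIORITY)[number];

// Sort cards so higher-priority schemas render first; input is untouched.
function sortCards<T extends { schema: Schema }>(cards: T[]): T[] {
  return [...cards].sort(
    (a, b) => SCHEMA_PRIORITY.indexOf(a.schema) - SCHEMA_PRIORITY.indexOf(b.schema)
  );
}
```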
Engine API
The engine exposes a clean REST API from Cloudflare Workers:
| Endpoint | Method | Function |
|---|---|---|
| /api/heartbeat/pulse | POST | Execute a heartbeat with context |
| /api/heartbeat/profile | GET/PUT | Read/update User DNA |
| /api/heartbeat/history | GET | Heartbeat history |
| /api/heartbeat/simulate | POST | Run crisis simulation |
| /api/heartbeat/actions | GET | Pending actions for the user |
Hosted on Cloudflare Workers with D1 for persistent storage. The entire engine runs at the edge — heartbeat analysis happens in the region closest to the user, with sub-100ms latency for most operations.
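A client triggering a pulse would POST to the first endpoint in the table. A sketch of building that request; the host and payload shape are assumptions, only the route comes from the table above:

```typescript
// Hypothetical client-side request builder for /api/heartbeat/pulse.
// "example.com" and the { userId, trigger } payload are illustrative.
function pulseRequest(userId: string, trigger: string): Request {
  return new Request("https://example.com/api/heartbeat/pulse", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userId, trigger }),
  });
}
```

Using the standard `Request` object keeps the same code usable from a browser, from Node 18+, or from inside another Worker.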
Key Takeaways
1. Proactive beats reactive. Health apps that wait for user input miss the entire point. A system that analyzes context and generates recommendations autonomously provides value even when users forget to open the app.
2. User DNA makes AI recommendations personal, not generic. The difference between "eat healthy" and "your baby with gluten allergy needs a vegetarian afternoon snack" is 342 logged meals of context. Rich user profiles transform generic AI into genuinely useful guidance.
3. Crisis simulation is integration testing for AI judgment. You can't just test if the code runs — you need to test whether the AI makes the right decisions when a user reports eating an allergen or when data goes silent for 48 hours.
4. Engines, not apps, are the right abstraction. By building Heartbeat as a reusable engine, NutriNen Baby got health intelligence for free. Future apps (senior care, fitness, mental health) will too. The engine handles AI, profiles, and decisions; apps handle personality and UI.
5. The UI Action Resolver is the key decoupling layer. AI generates abstract intentions. The Resolver maps them to UI commands. Apps render those commands however they want. This three-layer separation (AI → Actions → UI) is what makes the engine truly reusable.
About the Author
Gonzalo Monzón is a Senior Solutions Architect & AI Engineer with over 26 years building mission-critical systems in Healthcare, Industrial Automation, and enterprise AI. Founder of Cadences Lab, he specializes in bridging legacy infrastructure with cutting-edge technology.