From Floppy Disks to Edge Computing: 25 Years of Healthcare Data
Gonzalo Monzón
Founder & Lead Architect
I started working in healthcare IT in 1997. My first task was maintaining HP 9000 servers running Micronetics MSM — a multidimensional key-value database with its own language called MUMPS. Everything ran on terminals via Telnet and Xterm. The network was coaxial Ethernet. TCP/IP was "that internet thing." To move patient data between a primary care center and the hospital, a technician physically drove a floppy disk every morning.
Today, our platform Cadences runs healthcare modules on Cloudflare Workers — globally distributed, serverless, with queries returning in under 2ms. The database is SQLite on the edge. The total infrastructure cost is $5/month.
This is the story of everything that happened in between. Not from a textbook — from the trenches.
1997-2001: The Terminal Era
When I say "healthcare IT in the late 90s," most people imagine Windows applications and SQL databases. The reality was far more interesting — and far more alien to modern developers.
The Database That Was Also an Operating System
Micronetics MSM (later acquired by InterSystems and evolved into Caché) was not just a database. It was a complete runtime environment: database engine, application server, and programming language all in one. The language was MUMPS (Massachusetts General Hospital Utility Multi-Programming System), designed in 1966 for healthcare data. By 1998, it was already 32 years old — and running the majority of hospital information systems in Spain.
MUMPS stored data in globals — hierarchical, multidimensional key-value structures. Think of them as deeply nested trees where every node is both a value and a branch point:
```
^PATIENT("12345","NAME") = "García López, María"
^PATIENT("12345","ALLERGIES",1) = "Penicillin"
^PATIENT("12345","ALLERGIES",2) = "Ibuprofen"
^PATIENT("12345","VISITS","20000115","LAB","GLUCOSE") = "95"
```
No schema. No tables. No SQL. Just trees of keys and values that could be traversed programmatically using $ORDER — a primitive that returned the next sibling key at any level. This was NoSQL thirty years before anyone coined the term.
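To make $ORDER concrete, here's a minimal Python sketch — an illustration, not MUMPS code — that models a global as nested dicts and emulates the sibling-traversal semantics (real MUMPS collation sorts numeric subscripts before strings; plain string sort is a simplification here):

```python
# Hypothetical Python model of a MUMPS global: nested dicts keyed by strings.
patient = {
    "12345": {
        "NAME": "García López, María",
        "ALLERGIES": {"1": "Penicillin", "2": "Ibuprofen"},
    }
}

def order(node: dict, key: str = "") -> str:
    """Emulate MUMPS $ORDER: return the next sibling key after `key`,
    or "" when there are none left. (MUMPS collation is simplified
    to a plain string sort for this sketch.)"""
    for k in sorted(node.keys()):
        if k > key:
            return k
    return ""

# Walk every allergy for patient 12345, the way a MUMPS routine would:
# start from the empty key and step sibling-by-sibling until "" comes back.
allergies = patient["12345"]["ALLERGIES"]
found = []
k = order(allergies)
while k != "":
    found.append(allergies[k])
    k = order(allergies, k)

print(found)  # → ['Penicillin', 'Ibuprofen']
```

The loop shape — seed with an empty key, advance until the empty string returns — is exactly how MUMPS routines iterate any level of a global without knowing the keys in advance.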
The Infrastructure
The physical setup at a typical Catalan hospital in 1998:
- HP 9000 servers running Unix with MSM/MUMPS as the database and application layer
- Coaxial (thin-wire) Ethernet carrying pre-TCP/IP LAN protocols — the older stacks that predated the internet's dominance in corporate networks
- Terminal access via Telnet and Xterm — green-on-black screens, keyboard-only navigation, zero graphical interface
- Mirror servers with journaling replication — every transaction was logged and replayed on a standby server. This is conceptually the same as PostgreSQL's WAL streaming, but running in the mid-90s on proprietary hardware
- DDP (Distributed Data Processing) connections between servers on the same network — think of it as inter-process communication, but across machines
- Novell NetWare + Windows NT file servers for the few Windows applications that were starting to appear (mostly administrative)
32 Ports, 25 Analyzers, and a Lot of RS-232
The hospital laboratory was where the real engineering happened. We had a 32-port RS-232/parallel card installed in the server, connecting approximately 25 different lab analyzers — hematology, biochemistry, urinalysis, immunology — each speaking its own serial protocol.
Every blood sample result traveled from the analyzer through a serial cable, through the RS-232 card, into a MUMPS routine that parsed the manufacturer-specific protocol, validated the values, and stored them in the appropriate global. If a glucose result came back at 450 mg/dL, the routine would flag it and the system would alert the doctor on their terminal — in real time, 1998-style.
This was IoT before IoT existed. No HTTP, no JSON, no MQTT. Just serial ports, byte streams, and handcrafted parsers for every single device model.
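The shape of those handcrafted parsers can be sketched in a few lines of Python. The frame format below is invented for illustration (each real analyzer spoke its own proprietary serial protocol): `TEST|VALUE|UNIT` between STX/ETX control bytes, roughly in the style of ASTM-era devices, with a hypothetical critical-range table standing in for the alerting rules:

```python
# Sketch of a lab-analyzer result parser. The frame format and the
# critical ranges are assumptions for illustration, not a real protocol.
STX, ETX = b"\x02", b"\x03"

# Hypothetical critical-high limits that would trigger a terminal alert.
CRITICAL_HIGH = {"GLUCOSE": 400.0, "K": 6.0}

def parse_frame(raw: bytes) -> dict:
    """Strip framing bytes and split one result line into fields."""
    body = raw.strip(STX + ETX).decode("ascii")
    test, value, unit = body.split("|")
    return {"test": test, "value": float(value), "unit": unit}

def is_critical(result: dict) -> bool:
    """Flag values above the critical-high limit for that test."""
    limit = CRITICAL_HIGH.get(result["test"])
    return limit is not None and result["value"] > limit

frame = STX + b"GLUCOSE|450|mg/dL" + ETX
result = parse_frame(frame)
print(result, is_critical(result))  # flags the 450 mg/dL glucose
```

Multiply this by 25 device models, each with its own framing, checksums, and field layout, and you have the real integration workload of a 1998 hospital lab.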
The Floppy Disk Express
Here's the part that makes modern developers laugh — or cry.
Catalan hospitals in the late 90s had a network of CAPs (Centres d'Atenció Primària — primary care centers) scattered across their territory. These centers had their own small MSM installations, but there was no reliable network connection to the hospital. Some had dedicated modem lines, but many didn't.
The solution? A technician, a car, and a floppy disk.
Every morning, a technician drove a route through the primary care centers. At each stop, they ran an export routine that dumped the day's visits onto a 3.5" diskette — 1.44 MB of patient encounters, compressed in a proprietary format. They then drove to the hospital and loaded the data into the central system, which used it to organize the movement of physical patient files (actual paper folders with clinical histories) and coordinate referrals.
When the modem failed — and modems always failed — the floppy disk was the fallback. The sneakernet was the CDN of the 90s.
The Human Layer: What We Lost
Something that never appears in technical retrospectives is the relationship model of that era. Every week, we went physically to each hospital for maintenance. Four hours, eight hours, sometimes two or three days at larger centers.
During those visits, we didn't just fix servers. We sat down with:
- The IT coordinator — often an administrative staffer repurposed into IT who had learned MUMPS out of necessity
- Medical department heads — from surgery to emergency care to dialysis
- The hospital manager (gerente) — who needed reports and knew what the board wanted
- Administrative staff — who used the terminals 8 hours a day and knew every shortcut, every bug, every workaround
We understood the business because we were physically embedded in it. When the emergency department said "the triage screen is too slow," we could watch them work, count the seconds, and understand that "slow" meant "a nurse has to wait 3 seconds with a bleeding patient in front of her."
That context is irreplaceable. Modern remote IT support is efficient, scalable, and completely blind to the human cost of a 3-second delay in a trauma bay.
2001-2006: The Industrial Detour (And the SCADA Written in Lisp)
Between 2001 and 2006, I ran my own company with up to 20 employees, working on industrial automation — PLC programming, warehouse robotics for SEAT Martorell, and sports timing systems for AMB-IT/MyLaps. But the most technically fascinating project was the SCADA system for Aguas de Reus (the water utility of Reus, Tarragona).
That SCADA ran on HP-UX workstations connected to Sixnet PLCs (programmed in C) communicating via packet radio. The supervisory software itself was written in C, Lisp, and Tcl/Tk — yes, a real-time industrial control system with a UI built in Tcl/Tk and business logic in Lisp. The Sixnet PLCs had only four threads each, but they could coordinate among themselves and report to the central SCADA in near real-time.
In 2008, I replaced that original SCADA with Schneider Vijeo Citect, but with a twist: the underlying database had been migrated from HP-UX with MSM to InterSystems Caché, and I built custom ODBC connectors so Vijeo Citect could access the same data structures that MSM had used — without changing any of the PLC communication layer. The same pattern I would later apply in healthcare: replace the interface, preserve the data layer, don't break what's already working.
2006-2012: The Caché Era and the 250-Client Marathon
In parallel with the industrial work, I returned to healthcare in 2006. The landscape had evolved — MSM had become InterSystems Caché: the same multidimensional engine, but with an ObjectScript layer, a web server, and tentative SQL support bolted on top.
My role: supporting approximately 250 clients — clinics and hospitals — on a 24/7 rotation. When a Caché corruption hit a production global at 3 AM and the hospital couldn't access patient records in the ER, I was the one on the phone.
Building SQL Dashboards Over Key-Value Data
The clients needed business intelligence, but their data lived in MUMPS globals — no schema, no tables, no foreign keys. Two reputable Barcelona consulting firms had previously attempted to build BI solutions for the hospital network and failed. They tried connecting via ODBC and running complex queries as if Caché were a standard SQL database. It wasn't. The performance was catastrophic and the results were garbage because of the multidimensional data structure.
My approach was different: extract directly from the globals, transform in transit, and land in SQL Server. I used Pentaho Data Integration (Spoon/PDI) as the ETL engine — the best open-source option at the time. Hundreds of transformation diagrams, each one mapping a specific global structure to a relational schema.
On top of SQL Server, I built an operational BI interface using Backbone.js (the grandfather of modern MV* frameworks) and Java. This wasn't a static dashboard — users could trigger their own data refreshes through the UI, powered by trigger tables that launched ETL jobs. Self-service BI in 2013, before the term was trendy.
The Migration: A Strangler Fig That Took 4 Years
In 2013, I was hired directly by the hospital network — the same client I'd been supporting from the vendor side. The mission: migrate the entire Hospital Information System (HIS) from Caché/MUMPS to a modern stack.
I knew both sides intimately. I had done the Y2K migration for many of these hospitals. I had maintained Caché for years. I knew why every piece of data was stored the way it was — because I had helped put it there a decade earlier.
Building a Bridge Between Two Worlds
The migration strategy was what Martin Fowler later popularized as the Strangler Fig Pattern — gradually replacing functionality while both systems coexisted. Except textbooks describe this in the abstract. I had to make it work with live patient data across 8 hospital servers.
Step 1: Replicate MUMPS primitives in Python. I implemented the same traversal operations ($ORDER, $GET, $DATA) as Python functions that called Caché classes through InterSystems' C++ bridge. This gave me native-speed access to the globals with Python's flexibility for building APIs.
Step 2: Build a REST API layer. Using Bottle (a lightweight Python web framework), I exposed clean REST endpoints — /api/patients, /api/visits, /api/results — that translated between the modern world and the multidimensional data underneath. This turned a sealed legacy monolith into a service-oriented architecture.
Step 3: Bidirectional Change Data Capture (CDC). This was the hardest and most critical piece. I needed both systems to reflect each other's changes in near-real-time:
- Legacy → New: I "hooked into" Caché's mirror replication mechanism — the same journaling system used for server redundancy — to read an event stream of all data operations. A Python process consumed this stream and replicated the changes into SQL Server, where the new Angular-based HIS could read them.
- New → Legacy: SQL Server triggers fed a separate events table. Another Python process consumed those events and, using my MUMPS-primitive mappings, wrote the changes back into Caché — exactly as the legacy application would have.
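The New → Legacy direction can be sketched as a small polling loop. Everything here is a stand-in for illustration: `sqlite3` plays the role of SQL Server, a plain dict plays the role of the Caché global, and the table and column names are invented — the real system wrote through the MUMPS-primitive mappings over InterSystems' C++ bridge:

```python
import sqlite3

# Stand-ins: sqlite3 for SQL Server, a dict for the Caché global.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, "
           "patient TEXT, field TEXT, value TEXT, consumed INTEGER DEFAULT 0)")

legacy_global = {}  # plays the role of ^PATIENT

def set_global(patient: str, field: str, value: str) -> None:
    """The write path of the MUMPS-primitive mapping (SET ^PATIENT(...))."""
    legacy_global.setdefault(patient, {})[field] = value

def consume_events() -> int:
    """One polling pass: replay unconsumed trigger events into the legacy side."""
    rows = db.execute("SELECT id, patient, field, value FROM events "
                      "WHERE consumed = 0 ORDER BY id").fetchall()
    for ev_id, patient, field, value in rows:
        set_global(patient, field, value)
        db.execute("UPDATE events SET consumed = 1 WHERE id = ?", (ev_id,))
    db.commit()
    return len(rows)

# A trigger on the new system's tables would insert rows like this one:
db.execute("INSERT INTO events (patient, field, value) VALUES (?, ?, ?)",
           ("12345", "NAME", "García López, María"))
consume_events()
print(legacy_global)  # → {'12345': {'NAME': 'García López, María'}}
```

Marking events consumed only after a successful write is what makes the loop safe to restart — the property that mattered most when the consumer process itself had to survive 24/7 operation.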
The latency? 100 milliseconds.
At 100ms round-trip, a doctor could modify a patient record in the old terminal system and see it almost instantly in the new Angular interface — and vice versa. The probability of a data conflict (two people modifying the same field within the same 100ms window) was negligible in a clinical workflow.
The Numbers
- 8 Caché servers across different hospital centers, all migrated progressively
- 40 TB of unstructured data cleaned, transformed, and migrated — clinical notes, malformed PDFs, base64-encoded images, decades of accumulated medical records
- 4 years of coexistence — both systems running in parallel, synced bidirectionally at 100ms
- HL7 and DICOM integrations rebuilt through Mirth Connect and Python
- 3.5 years of 24/7 on-call support — because nobody else understood both the legacy and the new system at the level required to keep them in sync
One day, the 8 Caché servers were shut down for good. The Strangler Fig had consumed the tree. The migration was complete.
The Pattern That Repeats
After the hospital migration, I moved to Werfen (SysteLab division) — a global leader in in-vitro diagnostics — where I faced another legacy challenge: a hemodynamics system (MediVector) built in Delphi with an extreme Entity-Attribute-Value (EAV) model. No "patients" table, no "visits" table — just tables of "objects," "attributes," "values," and "relationships." A Universal Data Model designed by aerospace engineers (the software originated from a company that built components for the Space Shuttle).
I built the same kind of bridge: SQL functions that denormalized the EAV model in real-time, exposed through a .NET Core REST API, with Delphi's business logic preserved via DLL interop for writes. The CQRS pattern — reads through SQL, writes through the legacy engine — emerged naturally from the constraint. That system still runs today in 40+ hospitals across Spain, including Vall d'Hebron, La Paz, and the SAS network.
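The read side of that CQRS split boils down to pivoting EAV rows into flat records. The sketch below uses conditional aggregation — the same idea as the SQL denormalization functions, though the table, entity, and attribute names are invented for illustration (MediVector's actual Universal Data Model also had objects and relationships tables), and `sqlite3` stands in for the real database:

```python
import sqlite3

# Invented EAV rows: one hemodynamics study stored as attribute/value pairs.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eav (entity_id INTEGER, attribute TEXT, value TEXT)")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "name", "García López, María"),
    (1, "proc_name", "Cardiac catheterization"),
    (1, "ef_percent", "55"),
])

# Conditional aggregation pivots the attribute rows into one flat record —
# the core trick behind denormalizing an EAV model on the read path.
row = db.execute("""
    SELECT
        MAX(CASE WHEN attribute = 'name'       THEN value END) AS name,
        MAX(CASE WHEN attribute = 'proc_name'  THEN value END) AS proc_name,
        MAX(CASE WHEN attribute = 'ef_percent' THEN value END) AS ef_percent
    FROM eav
    WHERE entity_id = ?
""", (1,)).fetchone()

print(row)  # → ('García López, María', 'Cardiac catheterization', '55')
```

Reads go through pivots like this; writes stay with the legacy engine so its validation logic keeps running untouched — which is why the split emerged naturally from the constraint rather than from a pattern book.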
The pattern is always the same: understand the legacy primitives, build a translation layer, expose modern APIs, coexist until the old system can be retired.
2024-2026: The Edge Era
Today at Cadences Lab, our healthcare module (Heartbeat) runs on Cloudflare Workers with D1 (SQLite on the edge). Let's compare the architectures across 25 years:
| Aspect | 1998 | 2013 | 2025 |
|---|---|---|---|
| Database | MUMPS globals (key-value) | SQL Server + Caché (hybrid) | D1/SQLite (edge) |
| Data transport | Floppy disk / modem | TCP/IP + CDC events | HTTP/2 to nearest PoP |
| Query latency | ~50ms (local terminal) | ~100ms (CDC sync) | ~2ms (edge query) |
| Device integration | RS-232 serial (custom parsers) | HL7/DICOM via Mirth | FHIR REST + webhooks |
| Servers | HP 9000 (physical, on-prem) | 8 Caché + SQL Server cluster | 0 (serverless) |
| Monthly cost | ~$50K (hardware + staff) | ~$15K (infra + licenses) | $5 (D1 + Workers) |
| Maintenance | 2 techs, half the week on-site | 1 architect, 24/7 on-call | Deploy and forget |
| Backup | Tape + mirror journaling | SQL Server backup + scripts | D1 time travel (30 days) |
The most striking part? The data model has come full circle. MUMPS globals in 1998 were key-value. D1 in 2025 is SQLite — at its core, a B-tree key-value store with SQL syntax on top. We went from key-value, through relational, and back to key-value with a SQL interface. The 1998 engineers weren't wrong about the data model. They were just 25 years early on the deployment model.
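To see the full circle in code, here are the 1998 global paths stored as key-value rows in SQLite. Local `sqlite3` stands in for D1 here (an assumption — D1 exposes the same SQLite engine, but through Cloudflare Workers bindings rather than a local file):

```python
import sqlite3

# The 1998 globals, flattened to path/value rows: key-value with SQL on top.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (path TEXT PRIMARY KEY, value TEXT)")
db.executemany("INSERT INTO kv VALUES (?, ?)", [
    ("PATIENT/12345/NAME", "García López, María"),
    ("PATIENT/12345/ALLERGIES/1", "Penicillin"),
    ("PATIENT/12345/ALLERGIES/2", "Ibuprofen"),
])

# A prefix scan over the primary-key B-tree: the SQL spelling of walking
# ^PATIENT("12345","ALLERGIES") with $ORDER.
allergies = [v for (v,) in db.execute(
    "SELECT value FROM kv WHERE path LIKE 'PATIENT/12345/ALLERGIES/%' "
    "ORDER BY path")]

print(allergies)  # → ['Penicillin', 'Ibuprofen']
```

Same data, same traversal pattern, 27 years apart — only the deployment model changed.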
What 25 Years of Healthcare Data Taught Me
1. The fundamentals never change. In 1998, I needed low latency, data integrity, real-time synchronization, and fault tolerance. In 2025, I need exactly the same things. The tools evolve — from RS-232 to WebSockets, from journaling to WAL, from sneakernet to CDN — but the engineering principles are identical.
2. Legacy systems aren't stupid — they're survivors. MUMPS is still running in major hospitals and banks worldwide because it works. That "ugly" $ORDER traversal is algorithmically identical to a Redis SCAN. The engineers who designed these systems in the 1960s understood data access patterns that modern developers are rediscovering through NoSQL and key-value stores.
3. Every successful migration I've done followed the same pattern: learn the legacy primitives deeply, build an interop layer that speaks both languages, coexist for as long as necessary, and retire the old system only when the new one has proven itself under production load. There are no shortcuts. The consultants who try to do a "big bang" rewrite always fail — I watched it happen twice before I was brought in to fix the mess.
4. The human layer matters more than the tech layer. The weekly on-site maintenance visits of the 90s gave us something irreplaceable: context. We understood that a 3-second delay wasn't a performance metric — it was a nurse standing in front of a bleeding patient. Modern remote-first IT is efficient and scalable, but we've lost something in the transition. At Cadences Lab, we compensate by staying radically close to the user's workflow — not through office visits, but through real-time telemetry and direct communication channels.
5. The cost curve is the real revolution. From $50K/month in hardware and staff to $5/month on the edge, with better performance, global availability, and automatic backups. This isn't incremental improvement — it's a fundamental change in what's possible. A solo developer today can deploy healthcare infrastructure that would have required a 10-person team and $600K/year in the 90s. That's the real story of the last 25 years.
About the Author
Gonzalo Monzón
Founder & Lead Architect
Gonzalo Monzón is a Senior Solutions Architect & AI Engineer with over 26 years building mission-critical systems in Healthcare, Industrial Automation, and enterprise AI. Founder of Cadences Lab, he specializes in bridging legacy infrastructure with cutting-edge technology.