RS-232, Packet Radio, and a SCADA Written in Lisp: What Modern Devs Can Learn from the 90s
Gonzalo Monzón
Founder & Lead Architect
In 2024, a typical "serverless function" has 150+ npm dependencies, runs on a V8 isolate provisioned in 50ms, processes a JSON payload, queries a distributed database, and returns a response in 200ms. Nobody involved in this process could tell you how TCP works at the byte level. In 1998, I maintained a SCADA system for a water utility where the control logic was written in Lisp, the UI was TCL/TK, the PLCs were programmed in C with four threads, and they communicated via packet radio. There were no dependencies to install. There was no build step. Everything worked.
These are war stories from the last decade of the 20th century — the era before abstractions ate everything. Not as nostalgia, but as engineering lessons that the modern stack has systematically forgotten.
The SCADA That Ran on Lisp
Aguas de Reus, the water utility of Reus (Tarragona), operated a network of pumping stations, treatment plants, and distribution points across the region. They needed to monitor and control all of it in real time — water levels, pressures, flow rates, pump status, valve positions, chemical dosing.
The system architecture:
- Sixnet PLCs at each remote station — programmed in C, with only four execution threads. These PLCs read sensors (4-20mA analog inputs, digital inputs), controlled actuators (pumps, valves), and communicated upstream. Four threads. That's it. You had to be surgical about what ran when
- Packet radio for communication — not TCP/IP, not cellular, not satellite. Radio modems transmitting data packets between remote stations and the central control room. The bandwidth was measured in kilobits, not megabits. Every byte mattered
- HP-UX workstations at the control center — HP's Unix variant, running on PA-RISC hardware that cost more than most cars at the time
- SCADA software in C, Lisp, and TCL/TK — the data acquisition and control logic in C and Lisp, the operator interface in TCL/TK
Why Lisp for Industrial Control?
This seems bizarre by today's standards — Lisp in a SCADA? But it made profound engineering sense. The SCADA needed a rules engine to evaluate alarm conditions, calculate derived values, and make control decisions based on complex combinations of sensor readings. Lisp is, fundamentally, a language for symbolic computation and rule evaluation.
Consider a typical alarm rule: "If the reservoir level drops below 30% AND either pump A or pump B has been running for more than 4 hours AND the inflow rate is less than 50 L/min, trigger a high-priority alert and start the backup pump." In C, this is a nested mess of if-else statements that becomes unmaintainable as rules multiply. In Lisp, it's a data structure that can be composed, modified, and evaluated dynamically.
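For contrast, here is roughly what that single rule looks like as hard-coded C — a minimal sketch, with threshold names and values taken from the rule text above (the exact thresholds in the real system are not documented here). One rule is readable; fifty of them, cross-referencing each other, are not:

```c
#include <stdbool.h>

/* Illustrative thresholds from the rule described in the text. */
#define LEVEL_LOW_PCT     30.0
#define PUMP_RUNTIME_HRS   4.0
#define INFLOW_LOW_LPM    50.0

/* True when the high-priority alert should fire: reservoir below 30%
   AND (pump A or pump B running > 4 h) AND inflow under 50 L/min. */
bool should_alarm(double level_pct,
                  double pump_a_hours, double pump_b_hours,
                  double inflow_lpm)
{
    if (level_pct < LEVEL_LOW_PCT) {
        if (pump_a_hours > PUMP_RUNTIME_HRS ||
            pump_b_hours > PUMP_RUNTIME_HRS) {
            if (inflow_lpm < INFLOW_LOW_LPM) {
                return true;   /* raise alert, start backup pump */
            }
        }
    }
    return false;
}
```

The Lisp version expressed the same condition as a nested s-expression stored as data, so operators could add or edit rules without recompiling anything.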
The TCL/TK layer was the operator interface — real-time mimic diagrams showing the water network, with color-coded pipes (blue for flowing, red for alarm, grey for offline), trend charts, alarm lists. TCL/TK was lightweight, could render graphics efficiently on the HP-UX X11 display, and could be scripted rapidly when the operators needed a new view of the system.
Four Threads and a Radio
The Sixnet PLCs deserve their own appreciation. Each one had four threads of execution and a packet radio modem. In modern terms, you'd need to run your entire application — sensor polling, actuator control, local alarm logic, and upstream communication — on what amounts to a Raspberry Pi Zero with a walkie-talkie.
Programming these PLCs meant understanding timing at a level that most modern developers never encounter:
- Thread 1: Sensor polling loop — reading analog inputs every 100ms, debouncing digital inputs
- Thread 2: Control loop — executing PID controllers and safety interlocks
- Thread 3: Communication — assembling data packets, managing the radio protocol (with retransmission logic, because packet radio is unreliable), handling incoming commands from the central SCADA
- Thread 4: Local alarm processing and data logging to onboard memory (because the radio might be down for hours)
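To make Thread 1 concrete, here is a minimal debounce sketch in C — illustrative only, not the Sixnet firmware: a digital input must hold the same raw value for several consecutive 100ms polls before the debounced state is allowed to change, so a relay bounce or electrical glitch never reaches the control logic.

```c
#include <stdint.h>
#include <stdbool.h>

/* Number of consecutive identical 100ms samples required
   before the debounced state changes (value assumed). */
#define DEBOUNCE_COUNT 3

typedef struct {
    bool    stable;   /* last debounced value */
    bool    last_raw; /* previous raw sample */
    uint8_t run;      /* consecutive identical raw samples */
} debounce_t;

/* Feed one raw sample per polling cycle; returns the debounced state. */
bool debounce_update(debounce_t *d, bool raw)
{
    if (raw == d->last_raw) {
        if (d->run < DEBOUNCE_COUNT) d->run++;
    } else {
        d->run = 1;          /* input changed: restart the count */
        d->last_raw = raw;
    }
    if (d->run >= DEBOUNCE_COUNT) d->stable = raw;
    return d->stable;
}
```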
Every byte in the communication protocol was carefully allocated. There was no JSON. There was no protocol overhead. A typical status packet was 64 bytes — station ID, timestamp, 20 sensor values as 16-bit integers, 8 digital status bits, a checksum. That's it. Compare that to a typical IoT sensor payload today: 2KB of JSON with string keys, redundant metadata, and a JWT token that's larger than the actual data.
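One plausible layout for that 64-byte status packet, sketched as a packed C struct. The field widths beyond those named above (timestamp format, reserved bytes) are assumptions; the XOR checksum is one common choice for links like this, not necessarily the exact algorithm used:

```c
#include <stdint.h>

/* Hypothetical 64-byte status packet layout (field sizes assumed). */
typedef struct __attribute__((packed)) {
    uint16_t station_id;    /* which remote station */
    uint32_t timestamp;     /* seconds since epoch */
    uint16_t sensors[20];   /* scaled analog values */
    uint8_t  digital;       /* 8 digital status bits */
    uint8_t  reserved[16];  /* framing / future use (assumed) */
    uint8_t  checksum;      /* XOR of all preceding bytes */
} status_packet_t;

_Static_assert(sizeof(status_packet_t) == 64, "packet must be 64 bytes");

/* XOR checksum over every byte except the checksum field itself. */
uint8_t packet_checksum(const status_packet_t *p)
{
    const uint8_t *b = (const uint8_t *)p;
    uint8_t sum = 0;
    for (unsigned i = 0; i + 1 < sizeof(*p); i++)
        sum ^= b[i];
    return sum;
}
```

The `_Static_assert` is the point: when every byte is allocated by hand, the compiler should refuse to build if the layout drifts.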
The Evolution: From Lisp to Vijeo Citect
In 2008, I replaced the original C/Lisp/TCL/TK SCADA with Schneider Vijeo Citect — a modern commercial SCADA platform. But the migration had a constraint: the underlying database had been migrated from HP-UX with MSM (a MUMPS-based multidimensional database) to InterSystems Caché, and I had to preserve access to the same data structures.
The solution: custom ODBC connectors that let Vijeo Citect read and write to Caché using the same key-value paths that MSM had used. The PLCs, the radio communication, the field wiring — none of it changed. Only the supervisory layer was replaced. Same data, same field devices, new interface.
This is a principle I've applied in every legacy migration since: change one layer at a time. Never change the data model and the interface and the communication layer simultaneously. If something breaks, you need to know which layer caused it.
The Hospital Lab: 25 Analyzers on a Serial Bus
While the SCADA operated in the industrial world, I was simultaneously maintaining hospital IT systems. The laboratory was where both worlds collided — it was pure industrial automation embedded in a healthcare context.
The RS-232 Garden
A hospital laboratory in 1998 was a room full of specialized machines — analyzers — each designed to measure specific things in blood, urine, or tissue samples. A 32-port RS-232/parallel card installed in a server connected approximately 25 different analyzers:
- Hematology analyzers — complete blood counts, red and white cell differentials
- Biochemistry analyzers — glucose, cholesterol, liver enzymes, electrolytes
- Urinalysis stations — automated dipstick and microscopy
- Immunology/serology analyzers — antibodies, hormones, tumor markers
- Coagulation analyzers — PT, aPTT, fibrinogen
Each analyzer spoke its own serial protocol — different baud rates, different data formats, different handshaking requirements, different result encodings. There was no standard. Every manufacturer had their own dialect.
For each analyzer model, someone had written a MUMPS routine — a custom serial parser that:
- Opened the RS-232 port at the correct baud rate (1200, 2400, 9600, or 19200)
- Handled the handshaking protocol (some used ETX/ACK, some used proprietary sequences)
- Parsed the incoming byte stream into discrete results
- Validated the results against expected ranges
- Stored them in the appropriate MUMPS global, linked to the correct patient and order
- Flagged critical values for immediate clinician notification
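The parsing and flagging steps above can be sketched in C rather than MUMPS. The `"TEST|VALUE"` frame format and the glucose limits here are illustrative assumptions, not any real analyzer's protocol — every manufacturer's dialect was different, which was exactly the problem:

```c
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

typedef struct {
    char   test[8];   /* test code, e.g. "GLU" */
    double value;
    bool   critical;  /* outside critical limits? */
} lab_result_t;

/* Parses a hypothetical frame like "GLU|450" and flags values
   outside the (assumed) critical limits for that test. */
bool parse_result(const char *frame, lab_result_t *out)
{
    char test[8];
    double value;
    if (sscanf(frame, "%7[^|]|%lf", test, &value) != 2)
        return false;                     /* malformed frame */
    strcpy(out->test, test);
    out->value = value;
    out->critical = false;
    if (strcmp(test, "GLU") == 0)         /* glucose, mg/dL */
        out->critical = (value < 50.0 || value > 400.0);
    return true;
}
```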
If a glucose result came back at 450 mg/dL (dangerously high), the MUMPS routine would immediately update the patient's chart and trigger an alert on the doctor's terminal. This was IoT before IoT existed — machine-to-machine communication with real-time alerting, running on RS-232 serial lines in 1998.
The Debugging Reality
When a lab analyzer stopped reporting results — which happened regularly — the debugging process was pure hardware-software archaeology:
- Check the physical RS-232 cable (connectors loosened, cables chewed by cleaning carts)
- Check the port assignment (32 ports, easy to mix up during maintenance)
- Connect a serial terminal or protocol analyzer to the port and watch the raw bytes scrolling
- Compare the byte pattern to the protocol documentation (if it existed — sometimes you reverse-engineered from scratch)
- Test if the MUMPS parsing routine was correct by manually feeding it known byte sequences
- If all else failed, call the analyzer manufacturer, describe the byte pattern, and negotiate a fix
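Step 3 of that checklist — watching the raw bytes — usually came down to a hex dump. A trivial helper like this (illustrative, not the original tooling) formats a captured frame the way you would eyeball it on a protocol analyzer:

```c
#include <stdio.h>
#include <stddef.h>

/* Formats a byte buffer as space-separated hex, e.g. "02 47 4C 55 03".
   The caller's output buffer must hold at least 3*len bytes. */
void hex_dump(const unsigned char *buf, size_t len, char *out)
{
    for (size_t i = 0; i < len; i++)
        out += sprintf(out, i ? " %02X" : "%02X", buf[i]);
}
```

Staring at dumps like `02 47 4C 55 ... 03` is how you learned that one analyzer framed results with STX/ETX and another with a proprietary header nobody had documented.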
This was full-stack in the most literal sense — from the electrical signal on Pin 2 (TxD) to the patient record on the doctor's screen, you were responsible for every layer.
The Floppy Disk: Sneakernet as Architecture
In the late 90s, connecting primary care centers to the hospital network was often impossible. No DSL. No fiber. Sometimes not even a reliable phone line for a modem. The solution was what network engineers call the sneakernet — data transfer by physical transport.
A technician drove a route every morning, stopping at each CAP (Centre d'Atenció Primària) to copy the day's visits onto a 3.5" floppy disk. At the hospital, the data was loaded into the central system to organize patient file logistics and referral coordination.
The sneakernet had properties that modern distributed systems engineers would find familiar:
- Batch processing: Data was collected once daily in bulk — the original "batch ETL"
- Idempotency: The load routine had to handle duplicates (what if the technician ran the export twice?)
- Error recovery: If a diskette was corrupted (and 3.5" disks failed often), the fallback was yesterday's data plus a phone call to the center for critical cases
- Bandwidth: Andrew Tanenbaum was right — "Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway." A 1.44 MB floppy held more data than a dial-up modem of the day could transfer in an hour
The sneakernet was not a failure of technology. It was a pragmatic architectural decision — the most reliable data transfer mechanism given the infrastructure constraints. Modern systems make the same trade-off: S3 bulk imports, physical Snowball devices for petabyte transfers, USB drives for air-gapped networks.
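The idempotency property deserves a sketch, because it is the same one modern batch pipelines need. Assuming each visit record carries a stable ID (the names and fixed-size store here are illustrative), the load routine simply skips IDs it has already seen, so running the same diskette twice changes nothing:

```c
#include <stddef.h>
#include <stdbool.h>

#define MAX_RECORDS 1024

/* Toy in-memory store of already-loaded record IDs (illustrative). */
typedef struct {
    long   loaded_ids[MAX_RECORDS];
    size_t count;
} visit_store_t;

static bool already_loaded(const visit_store_t *s, long id)
{
    for (size_t i = 0; i < s->count; i++)
        if (s->loaded_ids[i] == id) return true;
    return false;
}

/* Loads a batch of record IDs; returns how many were newly inserted.
   Re-loading the same batch is a no-op — that is the idempotency. */
size_t load_batch(visit_store_t *s, const long *ids, size_t n)
{
    size_t inserted = 0;
    for (size_t i = 0; i < n; i++) {
        if (already_loaded(s, ids[i])) continue;  /* duplicate: skip */
        s->loaded_ids[s->count++] = ids[i];
        inserted++;
    }
    return inserted;
}
```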
What the 90s Got Right
1. Zero Dependencies
The Sixnet PLC code was pure C. The SCADA logic was Lisp evaluated by a custom engine. The MUMPS routines were self-contained programs in a self-contained runtime. There were no dependency trees, no package managers, no supply chain vulnerabilities.
When you updated the PLC code, you knew exactly what changed — because you wrote every line. When a MUMPS routine broke, the problem was in the routine, not in a transitive dependency six levels deep that some maintainer in another country abandoned.
Today, a "simple" Node.js function to read a sensor value might pull in 200+ packages. One of them will be deprecated next month. One has a CVE nobody's patched. The 90s approach wasn't scalable, but it was understandable — and understandable systems are debuggable systems.
2. Every Byte Was Earned
When your communication channel is a packet radio with kilobit bandwidth, you don't send {"sensorId": "PUMP_STATION_14_RESERVOIR_LEVEL", "value": 67.5, "unit": "percent", "timestamp": "2024-03-14T10:30:00Z"}. You send four bytes — 0x0E 0x02 0xA3 0xAF: station 14, the value 675 as a 16-bit integer with one implied decimal place (67.5), and a checksum.
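A minimal sketch of that kind of tight encoding — the exact layout (big-endian 16-bit value, XOR checksum) is an assumption, but the shape is the point: one byte of station ID, two bytes of scaled value, one byte of integrity check, nothing else.

```c
#include <stdint.h>
#include <stdbool.h>

/* Encodes one reading into 4 bytes: station ID, big-endian 16-bit
   value with one implied decimal place (67.5% -> 675), XOR checksum. */
void encode_reading(uint8_t station, double value, uint8_t out[4])
{
    uint16_t scaled = (uint16_t)(value * 10.0 + 0.5);
    out[0] = station;
    out[1] = (uint8_t)(scaled >> 8);
    out[2] = (uint8_t)(scaled & 0xFF);
    out[3] = (uint8_t)(out[0] ^ out[1] ^ out[2]);   /* checksum */
}

/* Returns false if the checksum fails — a dropped or corrupted packet. */
bool decode_reading(const uint8_t in[4], uint8_t *station, double *value)
{
    if ((uint8_t)(in[0] ^ in[1] ^ in[2]) != in[3])
        return false;
    *station = in[0];
    *value = ((in[1] << 8) | in[2]) / 10.0;
    return true;
}
```

Four bytes on the wire versus roughly 120 for the JSON equivalent — a 30x difference before any HTTP or TLS overhead.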
This discipline forced engineers to think about what information actually matters. The result: systems that were fast, efficient, and resilient to network degradation. When the radio dropped packets, you lost 64 bytes. Today, when an API call fails, you lose 2KB of JSON metadata that nobody reads anyway.
3. Physical Presence Created Understanding
We went to the hospitals every week. We sat with the doctors. We watched the nurses use our software. When someone said "this screen is slow," we didn't open a Jira ticket — we stood behind them and counted the seconds. We understood that "slow" in an emergency department means "a patient might be dying while I wait for this screen to load."
Modern remote-first development is efficient and scalable, but it creates a distance between the engineer and the consequence of their work. When your deploy is a wrangler deploy to a global CDN, it's easy to forget that somewhere, a nurse is tapping a screen and waiting for your code to respond.
4. Simplicity as Engineering Discipline
A Sixnet PLC with four threads isn't simple because the problems are simple — water treatment involves complex chemistry, safety interlocks, and real-time control. It's simple because the constraints forced simplicity. You couldn't add a library. You couldn't spawn another thread. You had to solve the problem within the box.
Modern developers have the opposite problem: unlimited resources encourage unlimited complexity. When you can add any npm package, run any number of microservices, and scale horizontally forever, the discipline of "do I actually need this?" disappears. The 90s engineers asked that question for every byte, every instruction, every wire.
The Bridge to Today
I'm not arguing we should go back to RS-232 and floppy disks. The tools we have today — serverless functions, globally distributed databases, AI models, real-time streaming — are genuinely better. But the engineering principles that made 90s systems reliable are exactly the principles that modern systems lack:
| 90s Principle | Modern Equivalent (Often Missing) |
|---|---|
| Understand every byte on the wire | Understand your protocol overhead and payload efficiency |
| Zero dependencies = zero supply chain risk | Minimize dependencies, audit what you include |
| Design for unreliable networks | Design for network partitions and degraded mode |
| Physical presence with users | Real-time telemetry and direct feedback channels |
| Four threads — solve it or don't | Question every abstraction: do I actually need this? |
| One person owns the full stack | Full-stack ownership reduces coordination overhead |
At Cadences Lab, we build with Cloudflare Workers, SQLite, AI agents, and modern web standards. But the philosophy comes from RS-232 and Lisp — understand the primitives, minimize the layers, own the full stack, stay close to the user. The tools changed. The engineering didn't.
About the Author
Gonzalo Monzón
Founder & Lead Architect
Gonzalo Monzón is a Senior Solutions Architect & AI Engineer with over 26 years building mission-critical systems in Healthcare, Industrial Automation, and enterprise AI. Founder of Cadences Lab, he specializes in bridging legacy infrastructure with cutting-edge technology.