<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>Casey Link's Weblog</title><link href="https://casey.link/atom/articles" rel="self" type="application/atom+xml" /><link href="https://casey.link" rel="alternate" type="text/html" /><id>tag:casey.link,2022:/atom/articles</id><updated>2025-12-19T00:00:00Z</updated><author><name>Casey Link</name></author><entry><title>Reusable NixOS images on Hetzner Cloud</title><link href="https://casey.link/blog/nixos-hetzner/" rel="alternate" type="text/html" /><id>tag:casey.link,2025-12-19:/blog/nixos-hetzner/</id><published>2025-12-19T00:00:00Z</published><updated>2025-12-19T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Deploy NixOS configurations to Hetzner Cloud, fast.</summary><content type="html"><![CDATA[<div><p>Hetzner is a price-competitive and conceptually simpler alternative to AWS and the other hyperscalers for the small orgs and teams that <a href="https://outskirtslabs.com">I tend to work with</a>.</p><p>NixOS is a declarative, reproducible operating system that turns 'works on my machine' into 'works on every machine' - infrastructure-as-code that's finally achievable for lean teams.</p><p>But Hetzner doesn't ship NixOS images, just the standard Debian, Ubuntu, and RHEL clones.</p><p>Most folks resort to <a href="https://guillaumebogard.dev/posts/declarative-server-management-with-nix/">using nixos-infect</a> or <a href="https://joinemm.dev/blog/nixos-hetzner-cloud">nixos-anywhere</a> to transmogrify a Debian or Ubuntu instance into NixOS. The more ambitious reach for <a href="https://developer-friendly.blog/blog/2025/01/20/packer-how-to-build-nixos-24-snapshot-on-hetzner-cloud/">Packer and rescue mode</a>, which means munging around manually in Hetzner's rescue mode.</p><p>All three share the same fundamental ritual: provision a VM, SSH in, and overwrite its soul with NixOS. 
These approaches work, but you pay the conversion tax every time you spin up a new VM - and the demons don't work for free nor are they particularly fast.</p><p>But the winds have shifted and three things have fallen into place:</p><p>First, <a href="https://github.com/apricote/hcloud-upload-image">hcloud-upload-image</a> was released in 2024. It's a simple Go tool that handles the soul conversion for you. hcloud-upload-image takes a disk image as input and side effects Hetzner with enough conviction that a Snapshot materializes.</p><pre><code class="language-bash">hcloud-upload-image upload \
    --image-path result/nixos-image-25.11.x86_64-linux.img \
    --architecture x86
... long wait while dark arts are performed...
Uploaded Image: 123467
</code></pre><p>Second, <a href="https://stephank.nl/">Stéphan Kochen</a> (<a href="https://github.com/stephank">gh</a>) opened <a href="https://github.com/NixOS/nixpkgs/pull/375551">nixpkgs PR #375551</a> to bring native Hetzner Cloud image building into nixpkgs. The PR packages hcloud-upload-image, adds a NixOS image builder config, and includes his own <a href="https://github.com/stephank/systemd-network-generator-hcloud">systemd-network-generator-hcloud</a> tool for IPv6 autoconfiguration.</p><p>Third, <a href="https://docs.determinate.systems/flakehub/cache">FlakeHub Cache</a>, available since late 2024, changes how you deploy NixOS configurations. Normally, deploying a NixOS config means evaluating the entire flake on the target machine - slow, memory-hungry, and painful on a cheap VPS. FlakeHub pre-computes <a href="https://docs.determinate.systems/flakehub/store-paths/">resolved store paths</a> when you publish your flake, so <code>fh apply nixos</code> skips evaluation entirely. Push your config from CI, deploy to your server with one command, and the configuration arrives in seconds rather than minutes.</p><p>Separately, these tools solve pieces of the puzzle. Together, they enable a workflow that didn't exist before: building reusable NixOS images for Hetzner Cloud!</p><p>I made a flake that packages these pieces into a ready-to-use solution for Hetzner Cloud images. I wrote the glue; the people above wrote the magic. The code is at <a href="https://github.com/outskirtslabs/nixos-hetzner">outskirtslabs/nixos-hetzner</a>.</p><p>As of this writing, PR #375551 is still open. My flake vendors the relevant pieces so you don't need to wait for it to merge.</p><div class="sidenote-container"><p>The images can be built for both x86_64-linux and aarch64-linux, and come bundled with <a href="https://docs.determinate.systems/determinate-nix">Determinate Nix</a> and the <a href="https://docs.determinate.systems/flakehub/cli">FlakeHub CLI</a>. 
Using it looks like this:<a id="fn1" class="sidenote-ref" href="#fnref1" role="doc-noteref"><sup data-label="note2">1</sup></a></p><div class="sidenote-column"><span id="fnref1" class="sidenote" role="doc-footnote"><sup class="sidenote-number">1.</sup>Surprisingly, Hetzner's ARM instances (CAX series) no longer have a price advantage over the Intel/AMD CX series. In fact they are €0.50-1.00 more expensive.<a class="text-inherit" role="doc-backlink" href="#fn1"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div><pre><code class="language-bash">HCLOUD_TOKEN=...your hcloud token...
ARCH=x86_64-linux # or aarch64-linux
HCLOUD_ARCH=x86   # or arm

nix build "github:outskirtslabs/nixos-hetzner#diskImages.$ARCH.hetzner" --print-build-logs

# inspect the image
ls result/*
IMAGE_PATH=$(ls result/*.img 2&gt;/dev/null | head -1)

# upload to hetzner cloud
hcloud-upload-image upload \
    --image-path="$IMAGE_PATH" \
    --architecture="$HCLOUD_ARCH" \
    --description="nixos-hetzner image"
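
# From here you can boot a VM from the snapshot with the official hcloud
# CLI. A sketch only: the server name, type, and SSH key name below are
# illustrative placeholders, not prescribed by nixos-hetzner.
SNAPSHOT_ID=...the image id printed by hcloud-upload-image...
hcloud server create \
    --name nixos-demo \
    --type cx22 \
    --image "$SNAPSHOT_ID" \
    --ssh-key my-key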
</code></pre><p>With that you have a NixOS Snapshot in your Hetzner Cloud console that you can clickops into a fresh VM. Once you've booted a VM from the image, you can authenticate with FlakeHub and use <code>fh apply</code> to deploy configurations directly.</p><p>FlakeHub Cache makes this fast - <em>really fast</em>. Instead of rebuilding or waiting for slow binary cache downloads, your configurations deploy in seconds.</p><p>You'll need to bring your own paid Hetzner and FlakeHub accounts of course.</p><p>For a more complete example, with Terraform/OpenTofu and GitHub Actions, check out the <a href="https://github.com/outskirtslabs/nixos-hetzner-demo">outskirtslabs/nixos-hetzner-demo</a>. It builds on nixos-hetzner and showcases a full continuous deployment methodology with NixOS.</p><div class="sidenote-container"><p>And, FYI, going into Q1/Q2 of 2026, my consulting calendar still has openings: if you are a small to medium org or team who needs a devops assist or Clojure full-stack reinforcement, <a href="https://outskirtslabs.com/#contact">get in touch</a>. 
<a id="fn2" class="sidenote-ref" href="#fnref2" role="doc-noteref"><sup data-label="note1">2</sup></a></p><div class="sidenote-column"><span id="fnref2" class="sidenote" role="doc-footnote"><sup class="sidenote-number">2.</sup>Happy holidays folks, I'm hoping for a 2026 where Hetzner makes all of the above redundant by just supporting NixOS natively.<a class="text-inherit" role="doc-backlink" href="#fn2"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fnixos-hetzner%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fnixos-hetzner%2F">Bluesky</a></p>]]></content></entry><entry><title>ol.client-ip: A Clojure Library to Prevent IP Spoofing</title><link href="https://casey.link/blog/client-ip-ring-middleware/" rel="alternate" type="text/html" /><id>tag:casey.link,2025-08-18:/blog/client-ip-ring-middleware/</id><published>2025-08-18T00:00:00Z</published><updated>2025-08-18T00:00:00Z</updated><author><name>Casey Link</name></author><summary>A clojure ring middleware for extracting client IPs from HTTP headers without the usual security vulnerabilities.</summary><content type="html"><![CDATA[<div><p>Getting the IP address of a client making a request to your web application sounds like it should be embarrassingly simple. 
It's not.</p><p>If you think it's just a matter of checking <code>:remote-addr</code> in your request map or pulling a value out of the X-Forwarded-For header, you've just opened yourself up to IP spoofing vulnerabilities that would make a pentester giggle with delight.</p><p>There is no easy, or simple, solution. The problem is inextricably tied to your specific deployment environment: whether you're behind a reverse proxy, a CDN (or both), which headers your infrastructure uses, and crucially, which ones you can actually trust. None of this information lives in the HTTP request itself. You need out-of-band knowledge about your infrastructure topology to correctly parse the maze of forwarding headers without letting an attacker spoof their way past your IP-based security controls.</p><p>Armed with that knowledge, you still need to correctly parse a tangled mess of headers, deal with IPv6 addresses with ports, know which header to trust when multiple ones exist, and implement all the validation logic to prevent spoofing attacks.</p><p>I've been wrestling with this problem for years across various client projects. After implementing the same buggy patterns over and over (and watching other developers do the same), I finally decided to solve this properly.</p><p><a href="https://github.com/outskirtslabs/client-ip"><code>ol.client-ip</code></a> is a Ring middleware (with zero deps!) that handles all this complexity so you don't have to. It's essentially a Clojure port of the excellent <a href="https://github.com/realclientip/realclientip-go">realclientip-go</a> implementation, which itself was born from the collective frustration of developers who were tired of getting this wrong.</p><h2 id="who-cares?">Who cares?</h2><blockquote><p>Why do we need IP-based security controls?</p></blockquote><p>Well, you often don't! 
And yes, in today's landscape of easy VPNs and CGNAT, the client's IP address is not a bulletproof identifier or security mechanism.</p><p>But sometimes it is very helpful for rate limiting, geographic compliance requirements, abuse prevention, audit trails, or fraud detection. All of these are made easier when you can, with some confidence, know the actual IP address making the request.</p><p>Please don't reach for IP-based controls first thing; they are problematic. But if you do, understand how to protect yourself from trivial IP spoofing.</p><h2 id="why-this-is-actually-hard">Why This Is Actually Hard</h2><p>Here's the thing about modern web infrastructure: your application almost never talks directly to the actual client. There's usually a reverse proxy, or three. Possibly a load balancer. Maybe a CDN. All of the above?</p><p>Each of these helpful intermediaries likes to "help" by adding headers telling you about the original client. The problem? Any client can also set these headers. Watch this:</p><pre><code class="language-bash">curl -H "X-Forwarded-For: 1.2.3.4" https://your-app.com
</code></pre><p>Does your app use <a href="https://github.com/http-kit/http-kit/">http-kit</a> as a web server and then look at <code>:remote-addr</code>? Boom. I just tricked your app into believing I'm from IP 1.2.3.4. If you're using that IP for rate limiting, geolocation, or (heaven forbid) authentication decisions, you're in trouble.</p><p>(My PR for this <a href="https://github.com/http-kit/http-kit/pull/599">is here</a>, by the way; I hope it gets merged soon. It doesn't "solve" the problem completely; it just makes it harder to shoot yourself in the foot if you twiddle a non-default option.)</p><p>Often you hear advice like "just use the leftmost IP in X-Forwarded-For." This is terrible advice. It's trivially spoofable.</p><p>The slightly better advice is "use the rightmost IP that's not from your infrastructure." Getting warmer, but now you need to know what "your infrastructure" means, and that changes depending on your deployment.</p><h2 id="enter-ol.client-ip">Enter <a href="https://github.com/outskirtslabs/client-ip"><code>ol.client-ip</code></a></h2><p>The library takes a strategy-based approach because (and this is my whole schtick!) there is no one-size-fits-all solution. Your network topology determines the correct strategy. The library just makes it easy to implement that strategy correctly.</p><pre><code class="language-clojure">(ns myapp.core
  (:require [ol.client-ip.core :as client-ip]
            [ol.client-ip.strategy :as strategy]))

;; Behind Cloudflare? They provide a trustworthy header
(def app
  (-&gt; handler
      (client-ip/wrap-client-ip
        {:strategy (strategy/single-ip-header-strategy "cf-connecting-ip")})))

;; Behind exactly 2 proxies? Count backwards
(def app
  (-&gt; handler
      (client-ip/wrap-client-ip
        {:strategy (strategy/rightmost-trusted-count-strategy "x-forwarded-for" 2)})))

;; Many more strategies are available; see the usage documentation.

;; Your handler now has access to the "real" † client IP
(defn handler [request]
  (let [client-ip (:ol/client-ip request)]
    {:status 200
     :body (str "I actually know your IP: " client-ip)}))
</code></pre><p>† "Real" is doing a lot of work here. The epistemological problem isn't just technical. We're trying to identify "the client" through layers of network abstraction, where each proxy and NAT boundary forces us to accept increasingly indirect evidence of the originating request. What <code>client-ip</code> provides is the earliest source address in the chain that you've declared trustworthy through your configuration. It's not knowledge of the "true" client (whatever that means in a world of shared connections and VPNs), but rather the best available proxy for client identity given your position in the network topology.</p><p>For better or worse, this approach forces you to think about your network topology. You can't just slap it in and hope for the best. You have to make a conscious decision about which strategy matches your setup.</p><h3 id="the-strategies-that-actually-matter">The Strategies That Actually Matter</h3><p>After implementing this for various clients, I've found that 90% of use cases fall into three categories:</p><p><strong>Single trusted header</strong>: You're behind Cloudflare, Fly.io, or a <em>single</em> properly configured nginx. These services provide a header with the socket-level client IP that can't be spoofed (as long as clients can't bypass the proxy). This is the golden path if you have it.</p><p><strong>Rightmost non-private</strong>: You have proxies in your private network (10.x.x.x, 192.168.x.x, etc.) and they all append to X-Forwarded-For. The rightmost non-private IP is your client. This works great until you put a proxy with a public IP in the chain, at which point you need...</p><p><strong>Rightmost trusted count</strong>: You know exactly how many proxies are between the internet and your app. Count backwards that many IPs in the X-Forwarded-For chain. 
Simple, effective, but requires you to update the count if your infrastructure changes.</p><p>The library supports more strategies: trusted IP ranges, chain strategies for multiple paths, even the dangerous leftmost strategy for when you need to know what the client <em>claims</em> their IP is. But honestly? Start with these three.</p><h2 id="let&apos;s-shave-the-yak">Let's shave the yak</h2><p>This library needs to parse IP addresses. It needs to answer questions like: is the IP address "192.168.1.22" in the trusted subnet "192.168.0.0/24"?</p><p>Seems easy enough? But, like, what is an IP address? What can show up there?</p><p>Consider these valid IP addresses that might show up in your headers:</p><ul><li><span><code>192.168.1.1</code> - Easy, IPv4</span></li><li><span><code>2001:db8::8a2e:370:7334</code> - IPv6, still manageable</span></li><li><span><code>[2001:db8::1]:8080</code> - IPv6 with port notation</span></li><li><span><code>fe80::1%eth0</code> - IPv6 with zone identifier (yes, that % is supposed to be there)</span></li><li><span><code>::ffff:192.0.2.1</code> - IPv4-mapped IPv6 address</span></li><li><span><code>[fe80::1%25eth0]:8080</code> - URL-encoded zone identifier with port</span></li></ul><p>And that's before malicious actors start sending you garbage like <code>definitely-not-an-ip.com</code> or <code>192.168.1.1.example.com</code> hoping to trigger interesting behaviors in your parser.</p><p>I originally had hoped to just lean on Java's <code>InetAddress.getByName()</code>, which can do IP address parsing if you pass it raw IP addresses. Seems innocent enough, right?</p><p>Wrong. That method will happily perform DNS lookups (it is kind of what it is supposed to do), which will block your request thread. Very bad for throughput.</p><p>So I had to implement an IP address parser to guarantee there were no side effects. It's in the <code>ol.client-ip.ip</code> namespace. 
It might come in handy elsewhere.</p><div class="sidenote-container"><p>It's a small detail, but it's the kind of thing you only learn after your app mysteriously hangs in production because someone sent you a malformed IP string that Java decided to resolve when DNS was on the fritz<a id="fn1" class="sidenote-ref" href="#fnref1" role="doc-noteref"><sup data-label="note1">1</sup></a>.</p><div class="sidenote-column"><span id="fnref1" class="sidenote" role="doc-footnote"><sup class="sidenote-number">1.</sup>It's not DNS. There's no way it's DNS. It was DNS<a class="text-inherit" role="doc-backlink" href="#fn1"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div><h2 id="should-you-use-this?">Should You Use This?</h2><p>If you're running a Clojure web app and you need to know client IPs (for analytics, rate limiting, geolocation, whatever), then, yeah, probably. At least read the documentation and existing literature to understand the problem space.</p><p>The library has zero dependencies beyond Clojure itself. It's about 1k SLOC (incl docstrings), thoroughly tested, and boring in all the right ways. 
It won't revolutionize your application, but it will prevent that awkward moment when you realize you've been rate-limiting your CDN instead of actual clients.</p><p>You can obtain <code>ol.client-ip</code> from <a href="https://clojars.org/com.outskirtslabs/client-ip">Clojars</a> or via a gitlib dep from the repo at <a href="https://github.com/outskirtslabs/client-ip">github.com/outskirtslabs/client-ip</a>.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fclient-ip-ring-middleware%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fclient-ip-ring-middleware%2F">Bluesky</a></p>]]></content></entry><entry><title>Taming Datomic Pro Deployment with Nix</title><link href="https://casey.link/blog/datomic-pro-flake/" rel="alternate" type="text/html" /><id>tag:casey.link,2025-01-24:/blog/datomic-pro-flake/</id><published>2025-01-24T00:00:00Z</published><updated>2025-01-24T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Simplifying Datomic Pro deployments for NixOS and non-NixOS systems.</summary><content type="html"><![CDATA[<div><p>I've been working with <a href="https://www.datomic.com/">Datomic</a> for years, and while I love its immutable, time-aware data model, deploying it has always been... an adventure. Unlike most modern databases that come with ready-to-use container images or OS packages, Datomic Pro arrives as a bare JAR file with some configuration examples.</p><p>If you are running it on bare metal or a standard Linux VM, then a custom systemd service file is all you need, but if you need to deploy it to a containerized environment you have a little work to do.</p><p>Datomic Pro is distributed as a Java application with a deployment model that requires manual assembly. 
It consists of a transactor (the server component), some storage backend (like a SQL db), and an optional console (web UI). Getting all these pieces working together requires:</p><ol start="1"><li><span>JVM setup and configuration</span></li><li><span>Various storage backend configurations</span></li><li><span>Secret management</span></li><li><span>Classpath wrangling for custom drivers</span></li><li><span>Coordination between different components</span></li></ol><p>And everyone ends up with custom shell scripts, Dockerfiles, and deployment procedures that rapidly drift out of sync with the codebase. In other words, it's a perfect candidate for the reproducible deployment approach that NixOS offers.</p><p>Enter <a href="https://github.com/Ramblurr/datomic-pro-flake">datomic-pro-flake</a>, my attempt to bring sanity to Datomic Pro deployments using the power of Nix. If you've never heard of Nix, it's that declarative package manager that your one colleague won't shut up about. (Confession: I'm that colleague.)</p><h2 id="what&apos;s-in-the-flake?">What's in the Flake?</h2><p>The datomic-pro-flake project provides three main components:</p><ol start="1"><li><span><strong>Nix Packages</strong>: Pure, reproducible builds of Datomic Pro components (transactor, console, and peer library)</span></li><li><span><strong>NixOS Modules</strong>: Declarative configuration for running Datomic Pro on NixOS systems</span></li><li><span><strong>Container Images</strong>: Run Datomic with your favorite container orchestrator (mine is docker compose), no Nix required!</span></li></ol><p>All three approaches are end-to-end tested in CI, which means you don't have to worry about whether they actually work. The tests exercise the package and module by booting a transactor, writing some datoms, and ensuring they can be read back. 
(I've spent enough late nights debugging non-functional database deployments for all of us.)</p><h2 id="the-nix-package-approach">The Nix Package Approach</h2><p>If you're already using Nix, this is probably what you're looking for. The flake provides:</p><pre><code class="language-nix"># In your flake.nix
inputs.datomic-pro.url = "https://flakehub.com/f/Ramblurr/datomic-pro/$LATEST_TAG.tar.gz";
</code></pre><p>The packages are configurable through Nix's override pattern, allowing you to add custom Java libraries or native dependencies.</p><p>This is particularly handy if you need specific JDBC drivers or want to integrate with exotic storage systems.</p><p>What makes this approach special? The entire packaging and deployment becomes declarative, reproducible, and easily version-controlled.</p><p>One particularly neat feature is the automatic JRE slimming. Since Datomic only needs specific JDK modules, we use <code>jdeps</code> to analyze exactly what's needed and create a minimal runtime environment. This reduces the package size by hundreds of megabytes.</p><h3 id="nixos-modules:-set-it-and-forget-it-(but-don&apos;t-actually-forget-it—this-is-a-database-after-all)">NixOS Modules: Set It and Forget It (But Don't Actually Forget It—This Is a Database After All)</h3><p>For those running NixOS systems, the modules provide a simple approach:</p><pre><code class="language-nix"># In your configuration.nix
services.datomic-pro = {
  enable = true;
  secretsFile = "/path/to/secrets";
  settings = {
    protocol = "sql";
    host = "0.0.0.0";
    port = 4334;
    # And any other Datomic settings... see README
  };
};
</code></pre><p>The module handles all the hardest parts:</p><ul><li><span>Systemd service configuration</span></li><li><span>Runtime property generation</span></li><li><span>Data directory setup</span></li><li><span>Classpath configuration</span></li><li><span>Ready to use with sops-nix, agenix, or whatever hand-rolled secrets management tool you have.</span></li></ul><p>And there's a companion <code>datomic-console</code> module for running the web UI with similarly straightforward configuration.</p><h2 id="container-images:-for-everyone-else">Container Images: For Everyone Else</h2><p>Not everyone is ready to drink the Nix Kool-Aid (though you should—it's delicious), so the flake also produces ready-to-use container images. Yes, good ol' Docker-compatible images that work exactly where you'd expect them to:</p><pre><code class="language-yaml">services:
  datomic-transactor:
    image: ghcr.io/ramblurr/datomic-pro:unstable
    environment:
      DATOMIC_PROTOCOL: sql
      DATOMIC_SQL_URL: jdbc:sqlite:/data/datomic-sqlite.db
      # ... more ....
    volumes:
      - ./data:/data
    ports:
      - 127.0.0.1:4334:4334
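    # a peer application would then connect using Datomic's documented SQL
    # storage URI scheme, datomic:sql://&lt;db-name&gt;?&lt;jdbc-url&gt;, pointed at
    # the same storage backend configured above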
</code></pre><p>These images support both environment variables and file-based configuration, making them suitable for Kubernetes, Docker Compose, or other container orchestration systems. The <a href="https://github.com/Ramblurr/datomic-pro-flake/blob/main/README.md">README</a> includes examples for deploying with various storage backends, including Postgres and SQLite.</p><h2 id="real-world-usage">Real-World Usage</h2><p>I've had variations of this project in production for a while, though it wasn't until relatively recently, when Datomic Pro became totally free (as in beer) and the binaries were released under the Apache 2.0 license, that I felt comfortable making this public.</p><p>With the client projects I've used this on, it has significantly simplified deployments. The container image in particular "just works" and is grokkable by non-Clojure, non-JVM-familiar operations folk.</p><p>FWIW, the practical deployment patterns I've found useful:</p><ol start="1"><li><span><strong>Local Development</strong>: Use the container with a SQLite backend for quick local development</span></li><li><span><strong>Production (Single-Node)</strong>: Deploy with SQLite for simple single-node projects</span></li><li><span><strong>Production (Multi-Node)</strong>: Use PostgreSQL for scalable multi-node deployments</span></li></ol><p>(For testing you're using Datomic in-mem DBs, right?!)</p><p>Each approach is documented in the README with concrete examples you can start from.</p><h2 id="future-directions">Future Directions</h2><p>The project is currently in a "stable but evolving" state. 
Before hitting 1.0, I'm planning to add:</p><ul><li><span>Better version pinning to prevent surprise database upgrades</span></li><li><span>Out of the box example for NixOS modules w/ Postgres</span></li><li><span>More storage backend examples and configurations</span></li><li><span>Integration with secrets management tools like sops-nix</span></li><li><span>Performance optimization for specific cloud environments</span></li></ul><h2 id="why-you-should-try-it">Why You Should Try It</h2><p>If you're using Datomic Pro, the NixOS module or container image might save you hours of configuration headaches and provide a more reliable deployment process.</p><p>Even if you're just curious about Datomic, it offers the easiest way to get started without wrestling with configuration files.</p><p>Interested? Check out the <a href="https://github.com/Ramblurr/datomic-pro-flake">GitHub repository</a>.</p><p>And if you have questions or suggestions, feel free to open an issue or reach out to me on the Clojurians Slack (@Ramblurr).</p><p>As with any database deployment, remember: test thoroughly before pointing it at production data. 
Your future self will thank you.</p><p>Happy datom accretion!</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fdatomic-pro-flake%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fdatomic-pro-flake%2F">Bluesky</a></p>]]></content></entry><entry><title>wayland-java, a library no one asked for</title><link href="https://casey.link/blog/wayland-java/" rel="alternate" type="text/html" /><id>tag:casey.link,2024-10-04:/blog/wayland-java/</id><published>2024-10-04T00:00:00Z</published><updated>2024-10-04T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Modern Java bindings for libwayland that let you create Wayland clients/servers without stepping outside the JVM.</summary><content type="html"><![CDATA[<div><p>...because someone had to bridge this particularly niche technological divide between JVM developers and the modern Linux desktop.</p><p><a href="https://github.com/ramblurr/wayland-java"><code>wayland-java</code></a> is a set of Java bindings for libwayland and wayland-protocols that lets you create <a href="https://wayland.freedesktop.org/">Wayland</a> client (and server!) applications in pure Java - without writing a single line of C code.</p><p>If you've ever needed to call into native libraries from Java, you've likely experienced the special kind of frustration that is JNI. It's verbose, error-prone, and feels like you're working in two entirely different languages at once... because you are. Since JDK 1.1, released in the late 90s, this has been the inevitable tax paid by JVM-language developers wanting to interface with native code.</p><p>With the release of JDK 22, we got the finalized Foreign Function &amp; Memory API (formerly Project Panama). This is a serious upgrade for Java developers needing to interact with native code. 
The new <a href="https://docs.oracle.com/en/java/javase/22/core/foreign-function-and-memory-api.html">FFM APIs</a> mean: no more JNI ceremony, faster native interop, and no more brittle glue code in C, just a cleaner interface to the systems we need to talk to.</p><p>wayland-java leverages this new API to provide a proper Java interface to the Wayland protocol, making it possible to create graphical applications that run directly on Wayland compositors, while writing code in only Java (or other JVM hosted languages). The code reads like actual Java, because it is!</p><p>The codebase for this library isn't entirely new. It's a fork of Erik De Rijcke's 2015-era effort, which I've completely rewritten to use Project Panama for FFI. The original project was impressive in its own right, but the new FFM API makes this approach significantly cleaner and more maintainable.</p><p>However, the project isn't production-ready, and probably won't be without more community or commercial interest. A client funded me to create a proof-of-concept application using Wayland on Linux desktop through late 2024. The work was just R&amp;D and won't be moving forward (for reasons unrelated to the tech!)</p><p>While I cannot open-source the full PoC project, I was able to extract the Wayland bindings into this little library.</p><p>If you're curious, <a href="https://github.com/ramblurr/wayland-java">the project</a> provides several artifacts: client stubs, server stubs, shared stubs, a scanner tool that generates bindings from Wayland protocol XML descriptions, and a pre-packaged selection of protocols. You can generate your own protocols too, if the included ones don't meet your needs. The library isn't particularly high-level; you'll still need to understand Wayland deeply.</p><p>I'm not expecting this library to take the world by storm. 
It's much more a case of "this exists, and if you need it someday, you won't have to suffer through building it yourself." The code is Apache-licensed, properly documented, and the build process won't make you question your career choices as a JVM developer. You'll need JDK 22+, Linux with libc, and libwayland available at runtime. (If you use Nix, the included devshell will get it working out of the box.)</p><p>Will the intersection of JVM developers and Wayland enthusiasts ever grow beyond dozens? Probably not. But for those few, perhaps this makes something possible that wasn't before.</p><p>And that's reason enough to build it.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fwayland-java%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fwayland-java%2F">Bluesky</a></p>]]></content></entry><entry><title>Fairybox: Building a Better Children's Music Box</title><link href="https://casey.link/blog/fairybox/" rel="alternate" type="text/html" /><id>tag:casey.link,2024-04-04:/blog/fairybox/</id><published>2024-04-04T00:00:00Z</published><updated>2024-05-30T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Creating a screen-free, RFID-powered audio player for my children using Raspberry Pi, Clojure, and a lot of soldering.</summary><content type="html"><![CDATA[<div><p>It's strange working in tech and being a parent. I sell software as a solution, but in so many ways software is the cause of strife and problems in our lives. And specifically I mean family life.</p><p>YouTube, Spotify, and friends have no place near my preschool-aged children. It's not that I'm anti-technology — obviously — but I've seen too many children zombie-scrolling to want that for my own.</p><p>When I was a child we had cassette tapes. Boxes and boxes of cassette tapes. 
My sister and I each had a cheap little Sony cassette player/boom box and a pair of those cheap over-ear headphones that have the thinnest black foam as ear pads.</p><p>We spent hundreds, possibly thousands, of hours listening to stories and music. Sometimes replaying the same track or story 50 times in a row. Annoying for parents, yes, but repetitive listening has been shown to be very important for language development.</p><p>I want to <em>encourage</em> an interest in stories and music while letting my children have control of their media. In the 2020s, how do I give a similar experience to my children without a sketchy algorithm or screens?</p><h3 id="commercial-off-the-shelf:-toniebox">Commercial Off the Shelf: Toniebox</h3><p>If you're not familiar with the Toniebox concept, it's brilliant in its simplicity: a colorful cube that plays audio when children place special figurines (called "Tonies") on top. No screens, no complicated interfaces — just place a character and hear a story. The figures use RFID chips that the box recognizes, triggering specific content that is cached on the box itself.</p><p>It really is a great product; however, <em>cost</em> is a problem. The Toniebox itself plus a Tonie figure costs around €100, and every Tonie thereafter costs €18-25.</p><p>And that's just for German-language Tonies. My children are being raised bilingually in English and German, with German naturally dominant given where we live.</p><p>Importing English-language Tonies from the UK bumps the cost of a Tonie up to €25-30 (thanks Brexit!), and those are British English. I'm not trying to be snobbish here, but we speak American English at home and I don't think it is unreasonable to want to make the same dialect available to my kids in their audio diet.</p><p>Importing Tonies from the USA? Now you are looking at €35-40 per Tonie. 
Yikes.</p><div class="sidenote-container"><p>As an amateur maker, the answer seemed obvious: I needed to DIY this myself<a id="fn1" class="sidenote-ref" href="#fnref1" role="doc-noteref"><sup data-label="f1">1</sup></a>.</p><div class="sidenote-column"><span id="fnref1" class="sidenote" role="doc-footnote"><sup class="sidenote-number">1.</sup>This economic argument falls apart if you calculate total time spent, material cost, etc.<a class="text-inherit" role="doc-backlink" href="#fn1"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div><p>My goals were simple:</p><ol start="1"><li><span>Create a device my child could use independently</span></li><li><span>Keep it screen-free (for the child, anyway)</span></li><li><span>Use open source tech</span></li><li><span>Make it robust enough to survive a preschooler</span></li></ol><h3 id="clojure,-libvlc,-and-the-event-driven-rabbit-hole">Clojure, LibVLC, and the Event-Driven Rabbit Hole</h3><p>I decided to build the project in Clojure, partly because I love the language, but mostly because its concurrency model with core.async channels makes building an event-driven system a joy rather than a nightmare. I'd never actually used core.async as the central abstraction in a project before. It works (and might be elegant?), but it can turn into a bit of a tangled mess if you're not careful with your channels.</p><pre><code class="language-clojure">;; The pub/sub at the heart of fairybox
(defn emit! [bus topic value]
  (let [pub-ch (:publisher bus)]
    (async/put! pub-ch {:topic topic :value value})))

;; And how to subscribe to events
(defn subscribe [bus topic]
  (let [sub-ch (async/chan 10)
        pub (:publication bus)]
    (async/sub pub topic sub-ch)
    sub-ch))

;; Example usage in a component
(defn start-button-listener [bus button-gpio action]
  (let [events (subscribe bus :buttons)]
    (go-loop []
      (when-let [event (&lt;! events)]
        (when (= (:gpio event) button-gpio)
          (emit! bus :player/commands {:action action}))
        (recur)))))
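
;; Not shown in the post: one plausible way to construct the bus the
;; functions above expect -- a channel paired with an async/pub keyed
;; on :topic. (A sketch; the real fairybox bus may differ.)
(defn make-bus []
  (let [ch (async/chan 10)]
    {:publisher ch
     :publication (async/pub ch :topic)}))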
</code></pre><p>Every component of the system — the RFID reader, the buttons, the LEDs, the audio player — communicates through this event system. When your toddler places an RFID card on the reader, it sends an event that gets picked up by the audio component, which then plays the associated content. Simple, elegant, robust.</p><p>One decision I'm particularly proud of was keeping the media player in-process using LibVLC rather than shelling out to a separate player. The Java bindings for LibVLC are surprisingly comprehensive and work well, even though they're excruciatingly Java-y with their <code>AbstractFactoryBuilderImplementationFactoryImpl</code> naming conventions. But hey, at least I didn't have to write the JNI bindings myself.</p><pre><code class="language-clojure">;; Using LibVLC to play media without spawning external processes
(defn play! [player url]
  (-&gt; player
      (.mediaPlayer)
      (.media)
      (.play url nil)))
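
;; Hypothetical usage (assuming vlcj 4.x's AudioPlayerComponent, which
;; exposes the mediaPlayer/media API chained above):
(comment
  (def player (uk.co.caprica.vlcj.player.component.AudioPlayerComponent.))
  (play! player "/media/stories/track01.mp3"))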
</code></pre><h3 id="the-web-ui:-htmx-and-websockets-for-the-win">The Web UI: HTMX and Websockets for the Win</h3><p>While the device itself is deliberately screen-free for my child, I still needed a way for parents to configure it. Enter HTMX and websockets — a surprisingly powerful combination that let me build a responsive web interface with minimal JavaScript.</p><p>The UI lets me assign RFID cards to specific audiobooks or playlists, control volume, and see what my kid is listening to. It's accessible from any device on our home network, which means I can easily change settings or help troubleshoot from my phone while my child is using the device. The interface updates in real-time when buttons are pressed on the physical device, making the whole thing feel cohesive (which, considering my partner and I are the only users, is probably an excessive level of polish, but whatever).</p><pre><code class="language-clojure">;; A taste of the HTMX magic
(defn audio-controls [req]
  [:div#audio-controls.audio-controls
   {:hx-ext "ws" 
    :ws-connect "/ws/audio-controls"}
   [:div.audio-title 
    [:span#title "Ready to play"]]
   [:div.controls-row
    [:button.control-button 
     {:hx-ws "send" :hx-vals {:action "prev"}}
     (icon/prev)]]])
</code></pre><p>The websocket connection keeps the UI in sync with the device state. When my daughter hits the physical "next track" button, the web UI updates instantly to show the new track. It's a small touch, but it makes the entire experience feel seamless.</p><h3 id="hardware-woes:-the-nixos-experiment-that-broke-my-spirit">Hardware Woes: The NixOS Experiment That Broke My Spirit</h3><p>I've been using NixOS for years on my servers and Linux boxes, not just as a development environment. I thought, "Wouldn't it be elegant to deploy the Fairybox on NixOS too?" This, friends, is what we call technological hubris of the highest order.</p><p>What followed was a weeks-long battle with device tree overlays, GPIO permissions, and the peculiarities of the Raspberry Pi 4. It turns out that while NixOS runs beautifully on the Pi, accessing GPIO pins and using device tree overlays is... let me put it this way: if you ever want to feel truly humbled by your own incompetence, try getting a NixOS RPi4 to talk to an RFID reader over SPI while also driving PWM LEDs.</p><pre><code class="language-nix"># A painful snippet from my failed NixOS experiment
hardware.deviceTree = {
  enable = true;
  filter = lib.mkForce "bcm2711-rpi-4-b.dtb";
  overlays = [
    {
      name = "spi0-1cs-overlay";
      dtsText = builtins.readFile ./overlays/spi0-1cs-overlay.dts;
    }
  ];
};
</code></pre><p>I banged my head against this wall for what seemed like eternity. I'd get one component working, only to find that another stopped. After exhausting every forum post, GitHub issue, and Discord chat I could find, I admitted defeat. I returned to a more conventional Raspberry Pi OS setup and deployed everything with a simple Ansible playbook. Sometimes the boring solution is the right one.</p><h3 id="hardware-iterations-and-soldering-disasters">Hardware Iterations and Soldering Disasters</h3><p>Let me tell you something about toddlers: they're walking chaos engines. No matter how well you think you've secured your hardware connections, a determined 3-year-old will find a way to jostle something loose.</p><p>The Fairybox went through several hardware iterations, especially the power solution. The first version used a standard USB power bank, but the Raspberry Pi would occasionally brown out when playing audio at high volumes. The second version used a LiPo battery with a simple boost converter, which lasted longer but still had stability issues.</p><p>For the final version, I went full overkill and got a KWeld spot welder (a whole other DIY project) to build a custom battery pack, paired with the AmpRipper 4000 charge controller/boost converter. This gives me a stable 5V 3A power supply that can run for hours and recharge quickly.</p><p>Despite years of practice, my soldering skills remain firmly in the "functional but ugly" category. Every time I open up the Fairybox to fix something, I'm confronted with blobs of solder that look like they were applied by someone wearing boxing gloves. But hey, they conduct electricity, and that's what counts, right?</p><h3 id="the-final-product:-a-box-full-of-joy">The Final Product: A Box Full of Joy</h3><p>The finished Fairybox sits in my daughter's room (when she's not carrying it around the apartment), a simple wooden box with colorful LED buttons. 
She places an RFID card on top, the lights dance in acknowledgment, and her favorite audiobook begins playing. She can press the big, friendly buttons to pause, skip tracks, or adjust volume.</p><figure class="image"><img src="./fairybox2@2x.webp" alt="The finished Fairybox with colorful LED buttons and a Winnie the Pooh RFID figure on top"></figure><p>What amazes me is how quickly she mastered it. There's something wonderfully intuitive about physical interfaces — no menus to navigate, no apps to open, just tangible cause and effect. (Though I should admit "intuitive" is stretching it a bit — it took her a couple weeks of placing cards and repeatedly asking "What does this button do again?" before she fully got the hang of it.)</p><p>Now my youngest is starting to get jealous, giving me the perfect excuse to build version 2.0. I've already started gathering components and thinking about improvements. I'm slightly terrified at the prospect of having two audio devices wandering around our not-very-large apartment, potentially blaring different stories at maximum volume simultaneously. I may need to include some kind of proximity sensor that lowers the volume when they get too close to each other.</p><h3 id="beyond-the-build:-technology-that-respects-childhood">Beyond the Build: Technology That Respects Childhood</h3><p>This project has reinforced my belief that technology doesn't have to mean screens and apps. We can build digital tools that respect the developmental needs of children — tools that encourage imagination rather than passive consumption.</p><p>As someone who builds technology for a living, I feel a responsibility to be thoughtful about the digital environments I create, especially for the most vulnerable users. My clients are human rights organizations trying to make the world better. 
Shouldn't I apply the same principles at home?</p><p>The Fairybox isn't just a toy — it's a statement about the kind of technology I want in my children's lives: respectful, empowering, and imagination-enhancing rather than attention-extracting.</p><p>And if you're wondering if all this effort was worth it when I could have just bought something off the shelf — the look on my daughter's face when she first used something Dad built answers that question perfectly.</p><figure class="image"><img src="./fairybox1.webp" alt="Photo of my child using the Fairybox"></figure><p>Now I just need to finish the second one before my youngest figures out how to stage a toddler coup.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Ffairybox%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Ffairybox%2F">Bluesky</a></p>]]></content></entry><entry><title>Cleaning House: Retiring My Old Travel Blogs</title><link href="https://casey.link/blog/cleaning-house/" rel="alternate" type="text/html" /><id>tag:casey.link,2022-09-15:/blog/cleaning-house/</id><published>2022-09-15T00:00:00Z</published><updated>2022-09-15T00:00:00Z</updated><author><name>Casey Link</name></author><summary>A reflection on taking down my old travel blogs and how both the internet and I have changed.</summary><content type="html"><![CDATA[<div><p>If you've found yourself here through an old link to <code>elusivetruth.net</code> or <code>binaryelysium.com</code>, I should probably explain: those blogs no longer exist.</p><p>I've taken them down, redirected the domains, and generally done some digital housekeeping. Apologies for the dead end.</p><hr><p>For those wondering (there can't be many of you), <code>binaryelysium.com</code> and <code>elusivetruth.net</code> were my open-source dev and travel blogs from my early-to-mid twenties. 
Binary Elysium started with my first excursions into open source around 2006 and chronicled my days as a KDE contributor.</p><p>When I began traveling full-time I found an audience interested in my travelogue, so Elusive Truth was spun out as a standalone blog. Both blogs drifted along for a while, as did I, laptop in hand, doing contract work while experimenting with various forms of travel.</p><p>This period culminated in my traversal of Europe from the North Sea to the Black Sea using nothing but human power: pedaling my bike and paddling my packraft.</p><figure class="image"><img src="blacksea1@2x.webp" alt="A man giving thumbs up next to a blue inflatable raft with a bicycle strapped on top at a beach."><figcaption class="text-center mt-1">My arrival at the Black Sea after traversing Europe with bike and boat</figcaption></figure><p>I completed that journey in 2014, and had my sights set further east, but as is our way, settled down, found a partner, and started a family.</p><p>The internet has changed dramatically since those early blogging days. What was once a more unfiltered, personal, and decidedly less commercial space has evolved into something sanitized, algorithmic, monetized, and focused on "content."</p><p>My old posts feel like artifacts from a different era. Not just an era of my life, but of the web itself. An era when people wrote purely for expression, no algorithms or brands in sight. I did at least (though my contributions were objectively mediocre).</p><p>My trans-European journey was partly inspired by Patrick Leigh Fermor's book "A Time of Gifts," a memoir of hiking across pre-WWII Europe in the 1930s. A musty book despite being "only" from the '70s. The narrative is of a privileged white guy struggling with the self-imposed misery of traipsing through Europe with letters of invitation to high society in his pocket but holes in his shoes.</p><p>Huh. 🤔 His story is similar to mine in most aspects. It lacks the pre-war intrigue. 
The letters of invitation are swapped for Couchsurfing/Warmshowers. But white and privileged, check, check.</p><p>It is embarrassing reading your own writing from a decade past, especially when it's infused with the naivety characteristic of that age.</p><p>Though they say the internet never forgets, I've found that's only partially true. If you're determined and know how to use the Wayback Machine, you could probably unearth some of my old writings and photos.</p><p>But honestly, I wasn't an exceptional writer, and there were (and are) far better blogs and books on the subject. I'm at peace with that realization now, in a way my younger self might have found disappointing.</p><p>Rather than read me, here are a few authors who influenced me deeply during those wandering years. Their work deserves to be revisited far more than mine, especially if you're someone looking to go on an adventure.</p><p>You can find them preserved at <a href="https://onvagrancy.com">onvagrancy.com</a>.</p><p>My favorite piece remains <a href="https://onvagrancy.com/isabelle/vagrancy.html">"On Vagrancy" by Isabelle Eberhardt</a>, a remarkable French iconoclast who defied every convention of her era - in her 20s she was dressing as a male Arab, converting to Islam, and living as a nomadic writer in the Maghreb Sahara.</p><p>Was it a coincidence that I was around the same age when her writing gripped me so completely? Probably not, but she tragically died at 27 years old in a flash flood. Meanwhile, from nearly 40, I can see what she never got the chance to.</p><figure class="image"><img src="eberhardt@2x.webp" alt=""><figcaption class="text-center mt-1">One of the few photos of Isabelle Eberhardt</figcaption></figure><p>I can't let this reflection end without mentioning "The Gentle Art of Tramping" by Stephen Graham, a 1924 book that was my road bible and, I think, has something for everyone.</p><blockquote><p>"The tramp is a friend of society; he is a seeker, he pays his way if he can... 
Tramping is a way of approach, to Nature, to your fellowman, to a nation, to a foreign nation, to beauty, to life itself."</p></blockquote><figure class="image"><img src="gentle.webp" alt="A person with a walking stick stands on a winding path leading through hills toward distant mountains under a bright sun, with another figure walking ahead."></figure><p>The digital detritus of our past selves accumulates so quickly now. Perhaps there's value in the occasional pruning—making space for the work that continues to resonate. The words of Eberhardt and Graham still do. My early blogs don't.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fcleaning-house%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fcleaning-house%2F">Bluesky</a></p>]]></content></entry><entry><title>Unifi USG Raspberry Pi LTE Failover</title><link href="https://casey.link/blog/rpi-usg-4g-failover/" rel="alternate" type="text/html" /><id>tag:casey.link,2020-10-23:/blog/rpi-usg-4g-failover/</id><published>2020-10-23T00:00:00Z</published><updated>2020-10-23T00:00:00Z</updated><author><name>Casey Link</name></author><summary>How to build a DIY 4G/LTE failover solution for Unifi USG using a Raspberry Pi and a USB dongle.</summary><content type="html"><![CDATA[<div><p>There's nothing quite like the panic that sets in when your home internet connection goes down right before an important client call. After one too many of these moments, I decided it was time for a proper backup solution.</p><p>But being the engineer and self-hoster I am, I couldn't just buy something off the shelf. 
No, I needed to overcomplicate things with a DIY approach involving a Raspberry Pi, a USB 4G dongle, and a <em>few</em> hours of tinkering.</p><p>I've built this setup with a Unifi Security Gateway (USG), but the Raspberry Pi part would work with most routers that support a secondary WAN connection. Fair warning: what follows is decidedly not a polished product.</p><p>The goal: turn a Raspberry Pi into a mini-router that connects to your cellular network via a 4G USB dongle and presents itself as a standard ethernet WAN connection to your USG. When your primary internet fails, the USG automatically fails over to the Pi's cellular connection, and your work continues uninterrupted—albeit potentially more expensive if your cellular plan charges by the gigabyte.</p><p>I opted for TinyCore's piCore Linux for the Pi because it's very lightweight, boots fast, and, well, I wanted to take it for a spin (I'm all about those alternative ARM Linux distros).</p><p>The setup involves configuring multiple network interfaces: the USB dongle interface (usb0), the ethernet port (eth0) that connects to the USG, and optionally WiFi (wlan0) for out-of-band management. It's really not much more than a tidy bit of Linux networking and a sprinkling of udev rules.</p><p>Of course, no DIY project is complete without a maddening workaround. The ZTE MF823 dongle I'm using creates its own default subnet (192.168.0.1/24), and despite my best telnet-hacking attempts to change it, it stubbornly resets after each power cycle. So if your home network uses the common 192.168.0.0/24 subnet, you'll need to adjust your network, use a different dongle model entirely, or, like, go get a real failover product.</p><p>What about performance? The theoretical LTE speeds sound impressive, but the reality is that LTE through a USB port on a Raspberry Pi 3 is not. 
The Pi 3's ethernet and USB ports share the same USB 2.0 bus with a theoretical maximum of 480 Mbps, creating a bottleneck that limits <strong>real-world throughput to around 15-30 Mbps</strong>. But this project is about failover and continuity, not performance. Just having connectivity at all is the primary goal. Pulling your fat Docker containers or the latest JavaScript framework can wait until your primary connection is restored.</p><p>If you want to replicate this setup, I've included instructions below.</p><p><strong>Table of Contents</strong></p><ul><li><span><a href="#hardware">Hardware</a></span></li><li><span><a href="#software">Software</a></span></li><li><span><a href="#address-space">Address Space</a></span></li><li><span><a href="#pre-setup---test-lte-dongle-is-working">Pre-setup - Test LTE dongle is working</a></span></li><li><span><a href="#setup-of-pi">Setup of PI</a></span></li><li><span><a href="#unifi-usg-wan-failover-configuration">Unifi USG WAN failover configuration</a></span></li><li><span><a href="#resources">Resources</a></span></li></ul><h2 id="hardware">Hardware</h2><ul><li><span>Unifi USG</span></li><li><span>An extra switch (unmanaged, 100 Mbit or 1 Gbit depending on whether your Pi supports gigabit)</span></li><li><span>Raspberry Pi</span></li><li><span>ZTE MF823 LTE USB Dongle</span></li></ul><h2 id="software">Software</h2><ul><li><span>Unifi Controller</span></li><li><span>TinyCoreLinux PiCore <a href="http://tinycorelinux.net/ports.html">download</a> (v11 at time of writing)</span></li></ul><h2 id="address-space">Address Space</h2><ul><li><span><code>192.168.0.1/24</code> for <code>usb0</code> - the MF823's default subnet; your home LAN mustn't overlap with it.</span></li><li><span><s><code>192.168.73.1/24</code> for <code>usb0</code> - we will change the MF823 to use this subnet</s> editing the dongle's settings doesn't stick across reboots</span></li><li><span><code>192.168.12.1/24</code> for 
USG&lt;-&gt;PI</span></li></ul><h2 id="pre-setup---test-lte-dongle-is-working">Pre-setup - Test LTE dongle is working</h2><p>Flash PiCore onto SD card</p><p>Boot PiCore, plug into ethernet</p><pre><code class="language-">sudo /sbin/udhcpc -v -i eth0 -x hostname:wan2 -p /var/run/udhcpc.eth0.pid
ping 1.1.1.1
</code></pre><p>Insert the dongle and wait ~60 seconds for it to settle</p><p>Test</p><pre><code class="language-">lsusb
</code></pre><p>While the dongle is booting it will show a red light and you will see</p><pre><code class="language-">ID 19d2:1225
</code></pre><p>After it is ready the light will turn green and it will change to</p><pre><code class="language-">ID 19d2:1405
</code></pre><p>Load the kernel module</p><pre><code class="language-">modprobe cdc_ether
ifconfig -a
</code></pre><p>You should see <code>usb0</code></p><p>Setup usb0</p><pre><code class="language-">sudo /sbin/udhcpc -v -i usb0 -x hostname:wan2 -p /var/run/udhcpc.usb0.pid
ping 192.168.0.1
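# optional sanity check (my addition): the dongle's DHCP reply should
# also have installed a default route via usb0
route -n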
</code></pre><p>Use Socks proxy from your workstation to access MF823's webui</p><pre><code class="language-"># on workstation on same LAN as the pi
ssh -D 1337 tc@192.168.1.146
</code></pre><p>(192.168.1.146 was the IP the Pi's eth0 got on my local LAN)</p><p>Use your browser's SOCKS settings to set a SOCKS 5 proxy of localhost port 1337 (ssh -D listens on the workstation's loopback)</p><p>Browse to 192.168.0.1 in browser, confirm the web ui is loading. If you have a sim card in, you should see LTE network connection status.</p><h2 id="setup-of-pi">Setup of PI</h2><p>Ok it's working. Time to set up the router on the pi.</p><p>First, let's set up wifi on the pi. <code>eth0</code> will become the LAN port for the pi router, but we need a headless/oob management channel; this will be over wifi.</p><p>SSH into the pi</p><p>Add ssh config and passwd to persistent config</p><pre><code class="language-">mkdir ~/.ssh
vi ~/.ssh/authorized_keys
# paste your ssh key
sudo echo '/usr/local/etc/ssh' &gt;&gt; /opt/.filetool.lst 
sudo echo '/etc/shadow' &gt;&gt; /opt/.filetool.lst
sudo echo '/home/tc/.ssh/' &gt;&gt; /opt/.filetool.lst
filetool.sh -b
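# (note: filetool.sh -b persists everything listed in /opt/.filetool.lst
# into the backup archive; piCore runs from RAM, so anything not listed
# there is lost on reboot)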
</code></pre><p>Download the wifi extension and reboot to load the module</p><pre><code class="language-">tce-load -wi firmware-rpi-wifi.tcz
tce-load -wi wifi.tcz
sudo reboot
</code></pre><p>SSH again (using the key this time) and check for <code>wlan0</code></p><pre><code class="language-">iwconfig
</code></pre><p>Connect to your AP</p><pre><code class="language-">sudo /usr/local/bin/wifi.sh 
</code></pre><p>Check <code>wlan0</code> connection</p><pre><code class="language-">ifconfig wlan0
</code></pre><p>Configure auto wlan connect on system boot</p><pre><code class="language-">sudo echo '/usr/local/bin/wifi.sh -a &gt; /tmp/wifi.log 2&gt;&amp;1' &gt;&gt; /opt/bootlocal.sh
filetool.sh -b
</code></pre><p>Reboot to test wlan0 auto config</p><pre><code class="language-">sudo reboot
</code></pre><p>Quickly SSH in over the <code>eth0</code> interface, get the <code>wlan0</code> IP address and then ssh back in over wifi.</p><p>From here on out I assume you are managing the pi over <code>wlan0</code>, as we will be making changes to <code>eth0</code>.</p><p>NOTE: The following section is included, but does not seem to actually work. Every time the dongle is rebooted, the settings are reverted.</p><blockquote><p>Next, let's change the subnet used by the ZTE MF823 router so it doesn't use the common <code>192.168.0.1</code> subnet.</p><p>SSH into the Pi, then telnet into the router</p><pre><code class="language-">telnet 192.168.0.1
# user: root
# password: zte9x15
</code></pre><p>Edit the file at <code>/usr/zte/zte_conf/config/userseting_nvconfig.txt</code></p><p>Change the values:</p><pre><code class="language-">dhcpStart
dhcpEnd
lan_ipaddr
lan_ipaddr_for_current
</code></pre><p>I assume in the rest of this that you are using the subnet <code>192.168.73.0/24</code></p></blockquote><p>Actually, we continue with <code>192.168.0.1</code>, since the above does not stick.</p><p>Create <code>/opt/eth0.sh</code></p><pre><code class="language-">#!/bin/sh

# give the kernel a moment to finish bringing up eth0
sleep 1.5

# stop any DHCP client still managing eth0; it gets a static address below
if [ -f /var/run/udhcpc.eth0.pid ]; then
  kill `cat /var/run/udhcpc.eth0.pid`
  sleep 0.1
fi

# static address on the USG-facing LAN port
ifconfig eth0 192.168.12.1 netmask 255.255.255.0 broadcast 192.168.12.255 up

sleep .1
# hand out DHCP leases to the USG's WAN2 port
sudo udhcpd /etc/eth0_udhcpd.conf &amp;
</code></pre><p>Make it executable</p><pre><code class="language-">chmod 775 /opt/eth0.sh
</code></pre><p>Create DHCP config for <code>eth0</code> in <code>/etc/eth0_udhcpd.conf</code></p><pre><code class="language-">start 192.168.12.100
end 192.168.12.200
interface eth0
option subnet 255.255.255.0
option router 192.168.12.1
option lease 43200
option dns 192.168.12.1
option domain wanfailover
</code></pre><p>Start and test the DHCP server. You should see it listening on UDP port 67.</p><pre><code class="language-">sudo udhcpd /etc/eth0_udhcpd.conf
ps -ef | grep udhcpd
sudo netstat -anp | grep udhcpd
</code></pre><p>Create init script to manage <code>usb0</code> in <code>/etc/init.d/dhcp-usb0.sh</code></p><p>Get the file contents here: <a href="./dhcp-usb0.sh"><code>dhcp-usb0.sh</code></a></p><p>Make it executable</p><pre><code class="language-">chmod 755 /etc/init.d/dhcp-usb0.sh
</code></pre><p>Create udev rule to auto connect to the <code>usb0</code> network in <code>/etc/udev/rules.d/15-zte-mf823.rules</code></p><pre><code class="language-">SUBSYSTEM=="usb", ATTR{idProduct}=="1405", ATTR{idVendor}=="19d2", RUN+="/etc/init.d/dhcp-usb0.sh restart"
</code></pre><p>Reload udev rules</p><pre><code class="language-">sudo udevadm control --reload-rules 
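# alternatively (my addition), re-fire the rule without replugging the dongle
sudo udevadm trigger --subsystem-match=usb --action=add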
</code></pre><p>Unplug USB device, wait a few seconds, plug it back in. Check that <code>usb0</code> has an IP in the <code>192.168.0.0/24</code> subnet.</p><p>Persist the config</p><pre><code class="language-">sudo echo '/opt/eth0.sh' &gt;&gt; /opt/.filetool.lst
sudo echo '/etc/eth0_udhcpd.conf' &gt;&gt; /opt/.filetool.lst
sudo echo '/etc/init.d/dhcp-usb0.sh' &gt;&gt; /opt/.filetool.lst
sudo echo '/etc/udev/rules.d/15-zte-mf823.rules' &gt;&gt; /opt/.filetool.lst
sudo echo '/opt/eth0.sh &amp;' &gt;&gt; /opt/bootlocal.sh
filetool.sh -b 
</code></pre><p>Reboot to test. You should see <code>eth0</code> with an ip address of <code>192.168.12.1</code>, and <code>usb0</code> should be configured.</p><p>Enable ipv4 forwarding</p><pre><code class="language-">sudo sysctl -w net.ipv4.ip_forward=1
sudo echo 'sysctl -w net.ipv4.ip_forward=1' &gt;&gt; /opt/bootlocal.sh
filetool.sh -b
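# verify the flag stuck (should print: net.ipv4.ip_forward = 1)
sysctl net.ipv4.ip_forward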
</code></pre><p>Install dnsmasq and iptables</p><pre><code class="language-">tce-load -wi dnsmasq
tce-load -wi iptables
</code></pre><p>Enable NAT</p><pre><code class="language-">sudo iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE
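# confirm the rule is present (my addition); the packet counters will
# increment once failover traffic starts flowing
sudo iptables -t nat -L POSTROUTING -n -v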
</code></pre><p>Make it persistent</p><pre><code class="language-">echo 'iptables -t nat -A POSTROUTING -o usb0 -j MASQUERADE' | sudo tee -a /opt/bootlocal.sh
echo 'dnsmasq' | sudo tee -a /opt/bootlocal.sh
</code></pre><p>Finally, add a little script to remove the wifi default gateway. Without this, the wifi script will take over the default gateway. The actual default gateway is set by our usb udev script.</p><p><code>/opt/fix-gw.sh</code></p><pre><code class="language-">#!/bin/sh

# gateway of the default route (flags exactly "UG"; host routes are "UGH")
gw=$(route -n | awk '$4 == "UG" {print $2}')

if [ ! -z "$gw" ]; then
  route del default gw "$gw"
fi
</code></pre><p>Save it</p><pre><code class="language-">chmod 755 /opt/fix-gw.sh
echo '/opt/fix-gw.sh' | sudo tee -a /opt/.filetool.lst
echo '/opt/fix-gw.sh' | sudo tee -a /opt/bootlocal.sh
filetool.sh -b
</code></pre><p>Do a final reboot. Connect your Pi to an empty switch, and connect your laptop to the same switch. You should have internet via the LTE dongle; verify with</p><pre><code class="language-">curl https://ifconfig.co/json | jq
</code></pre><p>You should see your LTE provider's info.</p><h2 id="unifi-usg-wan-failover-configuration">Unifi USG WAN failover configuration</h2><p>This is a small cluster-f*** depending on your Controller version and whether you have the old or new settings interface. In short, any docs you read about setting the "Port Remapping" feature are out of date since at least 2019.</p><p>You must not be using the WAN2 port for LAN traffic.</p><p>Assuming the old, non-beta (as of Oct 2020) settings UI, you can follow the steps below.</p><p>Are you from the future where the new beta UI is no longer beta, and the old UI is gone? Good luck.</p><ol start="1"><li><p>Create a WAN2 network</p><pre><code class="language-">Settings -&gt; Networks -&gt; [ + Create New Network ]
Purpose: WAN
Network Group: WAN2
Load Balancing: dropdown, choose
    "Failover Only" to use the WAN2 port only if WAN has failed
</code></pre></li><li><p>Assign the USG's port to the WAN2 network</p><pre><code class="language-">Devices -&gt; USG -&gt; Ports tab -&gt; [ Configure interfaces ]

Port WAN2/LAN2 Network: WAN2

Apply
</code></pre></li></ol><p>Wait for the USG to re-provision.</p><p>Test it by SSHing into the USG and executing:</p><pre><code class="language-">ip addr # check eth2
show load-balance status
show load-balance watchdog
</code></pre><p>That's it. Unplug your WAN1, watch it failover to WAN2. Plug WAN1 back in and see WAN2 recover.</p><p>In case you're wondering: you do get email alerts and alerts in the Controller UI whenever a WAN transition happens.</p><h2 id="resources">Resources</h2><ul><li><span><a href="https://web.archive.org/web/20201025001315/https://www.development-cycle.com/2017/04/27/zte-mf823-inside/">Hacking MF 823's UI</a></span></li><li><span><a href="https://web.archive.org/web/20200810195910/https://wiki.archlinux.org/index.php/ZTE_MF_823_(Megafon_M100-3)_4G_Modem">ArchLinux Modem Device info</a></span></li><li><span><a href="https://web.archive.org/web/20200526091424/https://iotbytes.wordpress.com/configure-microcore-tiny-linux-as-router/">Tinycore ip router setup</a></span></li></ul></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Frpi-usg-4g-failover%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Frpi-usg-4g-failover%2F">Bluesky</a></p>]]></content></entry><entry><title>Kubernetes for SMBs: When a Scooter Beats a Battleship</title><link href="https://casey.link/blog/kubernetes-for-smbs/" rel="alternate" type="text/html" /><id>tag:casey.link,2018-08-15:/blog/kubernetes-for-smbs/</id><published>2018-08-15T00:00:00Z</published><updated>2025-06-01T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Say it with me: I will not have Google-scale problems. 
I have customer-scale problems.</summary><content type="html"><![CDATA[<div><aside class="not-prose py-2 px-4 mb-0 border-l-4 border-blue-500"><h2 class="font-bold mb-2 text-blue-500"><svg class="w-5 h-5 inline-block mr-2 fill-current stroke-current " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" role="img"><path fill="none" d="M0 0h256v256H0z"></path><circle cx="124" cy="84" r="16"></circle><circle cx="128" cy="128" r="96" fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="24"></circle><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="24" d="M120 124a8 8 0 0 1 8 8v36a8 8 0 0 0 8 8"></path></svg>Still relevant</h2><p><p>This article is over 5 years old, but oh boy, it is as relevant today as it was in 2018. Check the end for a short update.</p><p>-Casey in 2025</p></p></aside><p>Everyone's talking about Kubernetes. At every conference, in every DevOps Slack channel, in the Orange Pages, the message is clear: if you're not running Kubernetes, you're doing containers wrong. Well, I'm here to tell you that for most small and medium businesses, Kubernetes is like using a battleship for your daily commute.</p><p>Don't get me wrong, containers are fantastic. They've revolutionized how we deploy software (especially for runtimes like ruby, node, and python, less for the jar/war community). But somewhere along the way, we confused "using containers" with "needing Google-scale orchestration."</p><div class="sidenote-container"><p>Say it with me: <strong>I will not have Google-scale problems. I have customer-scale problems. 
<a id="fn1" class="sidenote-ref" href="#fnref1" role="doc-noteref"><sup data-label="f1">1</sup></a></strong></p><div class="sidenote-column"><span id="fnref1" class="sidenote" role="doc-footnote"><sup class="sidenote-number">1.</sup>...and if you do, then you will have enough money to migrate to kubernetes.<a class="text-inherit" role="doc-backlink" href="#fn1"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div><p>I consult with SMBs and nonprofits on their technical infrastructure. Here's what their container deployments actually look like:</p><ul><li><span>5 to 20 containers running their core applications.</span></li><li><span>A &lt;your favorite&gt;SQL database.</span></li><li><span>Maybe Redis for caching.</span></li><li><span>They don't have a dedicated ops team. They have a Sarah who knows Linux and Docker pretty well and Jim who's really good with databases.</span></li></ul><p>That's it.</p><p>But then they give into the FOMO (after all they don't want to be the dinosaur still using "just Docker"). Now here's what happens when they deploy Kubernetes: suddenly they're running 20+ containers just for the infrastructure! The Kubernetes control plane itself, ingress controllers, storage plugins, network policies, monitoring sidecars... the service mesh <em>shudder</em>. 
When your infrastructure containers outnumber your actual workloads by 2-to-1, you've got to ask yourself: what problem are we solving here?</p><p>The teams I work with typically have 0-5 developers who also handle operations in devops style. They don't have dedicated SREs. They push code during business hours and schedule maintenance windows for updates. Their uptime requirements? "Don't break during the workday, and please let us sleep through the night."</p><p>These aren't the problems Kubernetes was designed to solve. Kubernetes solves Google problems: thousands of services, millions of requests, teams distributed across the globe. Most businesses, even successful and growing ones, will never have Google problems.</p><h2 id="the-hidden-tax-of-complexity">The Hidden Tax of Complexity</h2><p>Matt Rogish from ReactiveOps recently argued that <a href="https://web.archive.org/web/20200507004329/https://www.fairwinds.com/blog/is-kubernetes-overkill">"Kubernetes has low accidental complexity and high essential complexity"</a>. I appreciate this argument, in that <a href="https://www.infoq.com/presentations/Simple-Made-Easy/">simple doesn't mean easy</a>, but the "essential complexity" of Kubernetes is still astronomical for most organizations. Essential complexity isn't inherently virtuous. It's only valuable when it maps to essential problems you actually have. 
He dismisses CTOs who say "I just have a Rails application and plain old EC2 VMs will give me what I need" as being short-sighted.</p><p>The "essential complexity" of Kubernetes includes:</p><ul><li><span>Understanding pods, deployments, services, and ingresses</span></li><li><span>Grokking the networking model (ClusterIP vs NodePort vs LoadBalancer)</span></li><li><span>Learning YAML templating with Helm or Kustomize (and the absolute mess that managing that is)</span></li><li><span>Debugging why your pod is stuck in CrashLoopBackOff</span></li><li><span>Figuring out why your PersistentVolumeClaim won't bind</span></li><li><span>Understanding RBAC and service accounts</span></li><li><span>Keeping up with the regular Kubernetes releases and deprecations</span></li></ul><p>This isn't accidental complexity. These are fundamental concepts in the Kubernetes model. But for a team running 10 containers, this essential complexity is solving problems they don't have while creating new ones they didn't ask for.</p><p>The real cost is opportunity cost.</p><p>While you're wrestling with pod network policies and figuring out why your persistent volumes won't mount, your competitors are shipping features using boring, simple technology that just works.</p><h2 id="a-battleship-is-still-a-battleship-even-if-you-are-renting-it">A battleship is still a battleship even if you are renting it</h2><p>I can see the k8s proponents frothing at the mouth now, "what about managed Kubernetes?" Google has offered GKE for a while now, Azure launched AKS last year, and this year AWS launched EKS. True, these services remove some operational burden. You don't have to manage the control plane or worry about etcd backups. But here's the thing: even with managed Kubernetes, you're still on the hook for a lot.</p><p>You still need to understand pods, services, deployments, ingress controllers, persistent volumes, and the rest of the Kubernetes abstraction layer. 
When your app won't deploy, the error messages assume you speak fluent Kubernetes. When performance tanks, you're debugging through layers of network policies and service meshes. You're still managing node pools, scaling policies, resource quotas, network policies, and RBAC configurations. You're still debugging why your pods are stuck in "ImagePullBackOff" or why your PersistentVolumeClaim won't bind. Managed or not, you still need to know what a CNI plugin is and why yours is misbehaving.</p><p>Rogish admits that most companies don't need Kubernetes for scaling, the one thing it's probably great at.</p><p>Rogish's argument is that EC2 instances can't automatically restart crashed applications. "Apps running on regular EC2 instances have no automatic restart if your Rails application runs out of memory," he writes.</p><div class="sidenote-container"><p>Congratulations, you've reinvented the Linux operating system and systemd but with a thousand more moving parts, and distributed to boot. A simple <code>Restart=always</code> in your systemd service file solves the restart problem. Or even AWS Auto Scaling Groups (which I am always hesitant to recommend, but which, for ensuring a certain number of instances stays running, are fit for purpose)? <a id="fn2" class="sidenote-ref" href="#fnref2" role="doc-noteref"><sup data-label="f2">2</sup></a> Kubernetes is not a magic out-of-memory-begone! artifact by any means: your nodes can run out of memory just like your single EC2 instance can, and your pods will bounce up and down as your k8s scheduler thrashes around.</p><div class="sidenote-column"><span id="fnref2" class="sidenote" role="doc-footnote"><sup class="sidenote-number">2.</sup>And speaking of memory management, there's exciting work happening in userspace OOM handling right now. Facebook's been developing sophisticated out-of-memory daemons that can proactively manage memory pressure before your app even crashes. 
We're moving beyond the kernel's reactive OOM killer to intelligent userspace solutions.<a class="text-inherit" role="doc-backlink" href="#fn2"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div><h2 id="simple-alternatives-that-actually-work">Simple Alternatives That Actually Work</h2><p>Here's the thing: you probably already have everything you need. Docker plus systemd can handle most single-host deployments beautifully. Write a systemd unit file, enable it, and you're done. Need to update? <code>docker pull</code>, <code>systemctl restart</code>. It's so simple it feels like cheating.</p><p>For multi-container applications, docker-compose gives you 80% of what Kubernetes offers with about 10% of the complexity. Define your services in a YAML file, run <code>docker-compose up</code>, and watch your stack come alive. Need horizontal scaling? Run haproxy or nginx on one box as a load balancer in front of several more boxes running Docker, and add your favorite monitoring package. It's not fancy, but it works.</p><p>And sometimes, a bash script that pulls a new container and restarts it is all you need. I've seen this "architecture" quietly mint millions in revenue without breaking a sweat. 
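A minimal sketch of such a script (the image name, container name, and port are hypothetical placeholders, not a prescription):

```shell
#!/bin/sh
# deploy.sh -- naive pull-and-restart deploy; adjust names to your setup.
set -eu

IMAGE="registry.example.com/myapp:latest"   # placeholder image
NAME="myapp"                                # placeholder container name

docker pull "$IMAGE"
# remove the old container if it exists, then start the new one
docker rm -f "$NAME" 2>/dev/null || true
docker run -d --name "$NAME" --restart=unless-stopped -p 8080:8080 "$IMAGE"
```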
It's understandable, debuggable, and maintainable by anyone who knows basic Linux.</p><h3 id="the-boring-path-forward">The Boring Path Forward</h3><p><a href="https://web.archive.org/web/20180806233940/https://mcfunley.com/choose-boring-technology">Dan McKinley's "Choose Boring Technology"</a> remains my north star for tech-stack and infrastructure decisions. Every company gets about three innovation tokens. Spend them wisely. If your core business is e-commerce, why spend an innovation token on orchestration? Use boring tools for infrastructure so you can be innovative where it matters: your actual product.</p><p>Start with the simplest thing that could possibly work. Measure actual pain points before adding complexity. Feel the pain of manual deployments before automating. Hit scaling limits before building for infinite scale. You might be surprised how far simple solutions can take you.</p><p>If you eventually need Kubernetes, it'll be obvious. You'll have specific problems that simpler tools can't solve. Your team will have grown. You'll have budget for dedicated operations. When that day comes, yea sure, start with a managed service like GKE or AKS. 
The migration will make sense because it solves real problems, not theoretical ones.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fkubernetes-for-smbs%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fkubernetes-for-smbs%2F">Bluesky</a></p>]]></content></entry><entry><title>The Reality Gap: Why Human Rights Technology Often Fails on the Ground</title><link href="https://casey.link/blog/reality-gap-human-rights-tech/" rel="alternate" type="text/html" /><id>tag:casey.link,2018-05-28:/blog/reality-gap-human-rights-tech/</id><published>2018-05-28T00:00:00Z</published><updated>2018-05-28T00:00:00Z</updated><author><name>Casey Link</name></author><summary>In the aftermath of the Arab Spring, I consulted on a project developing a mobile application for citizen journalists.</summary><content type="html"><![CDATA[<div><p>In the aftermath of the Arab Spring, I consulted on a project developing a mobile application for citizen journalists. The app was technically sophisticated. It helped people record and edit video on their smartphones. It had built-in lessons with training on framing, narrative structure, and journalism topics like citing and protecting sources. The Valley-based developers were excited about "empowering everyday people" to document historical events with journalistic rigor. Keep in mind this was before the rise of TikTok and YouTube Shorts. It was back when traditional media was used and respected.</p><p>In principle, it worked perfectly. In practice? The app failed.</p><p>It generated high-quality video files ranging from 500MB to 3GB per story package—impossible to transmit over the congested 3G networks and intermittent connections available in regions experiencing political upheaval. And that's before considering the prohibitive mobile bandwidth costs. 
Citizen journalists reverted to posting short, unedited clips directly to social media platforms that compressed the videos. The app's sophisticated storytelling features went unused, while important documentation happened through the simplest channels that could actually handle infrastructure bottlenecks.</p><p>This wasn't a one-off failure. In human rights tech, this sort of failure is common.</p><h2 id="the-great-disconnect">The Great Disconnect</h2><p>A vast reality gap separates the people who build human rights technology from those who need to use it. We've created a situation where tools are designed in Silicon Valley, London, or Berlin for use in entirely different contexts; where infrastructure, digital literacy, and everyday realities are vastly different.</p><p>While human rights defenders prioritize simplicity, familiarity, and reliable offline functionality, developers often focus on advanced features, perfect security models, and sophisticated workflows that assume stable infrastructure.</p><p>What we're left with is a graveyard of well-intentioned technologies that look impressive in demos but fail in the field.</p><p>In my decade of consulting with human rights organizations, I've seen this story play out repeatedly. Applications require constant internet connectivity in regions where connections are spotty at best. Tools generate large files without considering how users will transmit or store them. Software gets designed for high-end devices in contexts where users have older phones with limited processing power. Security updates require bandwidth many users simply can't access.</p><p>Some years ago, I worked with a cocoa importer who wanted to improve their sustainability monitoring across their West African supply chain. Their team understood the reality on the ground. Mobile signals in rural areas were either non-existent or prohibitively expensive. 
Instead of forcing a digital-first approach, their field inspectors gathered information using paper forms. The software solution processed these paper forms once inspectors returned to their offices, automatically extracting and aggregating sustainability metrics from the scanned documents. The system generated the quantitative data they needed for annual reports and year-over-year tracking.</p><p>The inspectors could focus on their work in the field without worrying about connectivity, while the importer got the digital sustainability data they needed. Not flashy, but it worked because it respected local constraints from the start.</p><p>What happens when these tools fail? Human rights defenders don't just abandon their work, <a href="/blog/security-paradox-human-rights-tech/">they find creative workarounds</a>. Activist collectives use shared email accounts as makeshift archives. WhatsApp groups become documentation databases (yes, <em>sigh</em>).</p><p>They're smart, they're adaptable! 
When your solution fails them (because you didn't understand their local constraints), they come up with a better one.</p><p>Better, because it actually works.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Freality-gap-human-rights-tech%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Freality-gap-human-rights-tech%2F">Bluesky</a></p>]]></content></entry><entry><title>The Security Paradox: When Good Advice Makes Human Rights Work Less Secure</title><link href="https://casey.link/blog/security-paradox-human-rights-tech/" rel="alternate" type="text/html" /><id>tag:casey.link,2017-02-01:/blog/security-paradox-human-rights-tech/</id><published>2017-02-01T00:00:00Z</published><updated>2017-02-01T00:00:00Z</updated><author><name>Casey Link</name></author><summary>How well-intentioned security recommendations can create dangerous vulnerabilities for human rights defenders working in challenging contexts.</summary><content type="html"><![CDATA[<div><div class="sidenote-container"><p>Picture this (real!) 
example from the <a href="https://web.archive.org/web/20161210151707/https://www.theengineroom.org/wp-content/uploads/2016/12/technology-tools-in-human-rights.pdf">"Technology Tools in Human Rights" study</a> (PDF) <a id="fn1" class="sidenote-ref" href="#fnref1" role="doc-noteref"><sup data-label="f1">1</sup></a></p><div class="sidenote-column"><span id="fnref1" class="sidenote" role="doc-footnote"><sup class="sidenote-number">1.</sup>conducted by The Engine Room and funded by the Oak Foundation in 2016.<a class="text-inherit" role="doc-backlink" href="#fn1"><svg class="size-4 inline ml-1 text-inherit border-b " xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256" aria-hidden="true" focusable="false" role="img"><path fill="none" d="M0 0h256v256H0z"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 136 32 88l48-48"></path><path fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16" d="M80 200h88a56 56 0 0 0 56-56h0a56 56 0 0 0-56-56H32"></path></svg><span class="sr-only">Back to reference</span></a></span></div></div><p>An NGO documenting human rights abuses was encouraged by security experts to switch from Windows to a free software operating system. The reasoning was sound - better security, less surveillance, more control. But there was a problem.</p><p>Their printer wouldn't work with the new system.</p><p>Staff started saving sensitive documents to USB sticks and printing them at local internet cafes. 
The USB sticks went missing, exposing far more sensitive data than the theoretical risks they'd been trying to avoid.</p><p>I call this the "security paradox" - when well-intentioned security advice actually creates new, often more dangerous vulnerabilities.</p><h2 id="why-does-this-keep-happening?">Why Does This Keep Happening?</h2><p>The root cause isn't complicated: there's a massive disconnect between security experts (often in Europe or the US) and the daily realities of human rights defenders working in challenging contexts.</p><p>Valeria Umaña, who works with groups in Nicaragua, puts it bluntly in the Technology Tools in Human Rights report:</p><blockquote><p>"For people in the countryside, the more apps they have, the more problems they can have because they often don't know how to use them."</p></blockquote><p>The pursuit of perfect security ignores imperfect realities, and actively endangers the people it's meant to protect.</p><p>When security recommendations fail, vulnerable populations bear the risk. It's not the security consultant who faces danger when documentation leaks - it's the victims and witnesses who provided testimony.</p><p>Trust evaporates after a security recommendation backfires, making organizations resistant to all security advice, even the good stuff.</p><p>For chronically underfunded human rights organizations, investing precious time and money into systems that ultimately fail is devastating.</p><p>I've witnessed organizations abandon critical documentation work altogether after particularly traumatic security failures. That's not just a technical problem - it's a human rights catastrophe.</p><h2 id="a-better-way-forward">A Better Way Forward</h2><p>The good news? There are approaches that actually work. 
The <a href="https://web.archive.org/web/20170820104031/https://library.theengineroom.org/humanrights-tech/#conclusion">conclusion</a> of the Engine Room study has some stellar advice.</p><p>I also would add these three additional techniques I have found useful:</p><ol start="1"><li><p><strong>Start with risk assessment, not tool selection</strong>. Understand the specific threats an organization faces before recommending solutions. A women's collective documenting domestic violence faces different risks than journalists exposing government corruption.</p></li><li><p><strong>Value simplicity and familiarity</strong>. Sometimes a slightly less secure tool that people will actually use correctly is better than a theoretically secure one they'll work around. As Rory Byrne says in the report, "People like to stick with what they know."</p></li><li><p><strong>Implement changes incrementally</strong>. Rather than radical overhauls, focus on gradual improvements with continuous feedback. When I work with human rights groups, we start with one small change, perfect it, then move to the next.</p></li></ol><h2 id="the-reality-check-we-need">The Reality Check We Need</h2><p>I believe deeply in the right to privacy and the importance of security for human rights work. But I've learned to be humble about technology solutions. The most secure system isn't the one that looks best on paper, it's the one that actually protects people in real-world conditions.</p><p>Technology doesn't make change. People make change. They need security approaches that recognize their humanity, constraints, and contexts.</p><p>So the next time you receive security advice, remember the NGO that had printer trouble and ended up with USB sticks in internet cafes. 
They weren't careless - they were trying to be more secure.</p><p>Don't let the latest security solution become the next problem.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fsecurity-paradox-human-rights-tech%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fsecurity-paradox-human-rights-tech%2F">Bluesky</a></p>]]></content></entry><entry><title>A Reluctant Relationship: Yubikey and Google Authentication</title><link href="https://casey.link/blog/yubikey-google-auth/" rel="alternate" type="text/html" /><id>tag:casey.link,2011-12-13:/blog/yubikey-google-auth/</id><published>2011-12-13T00:00:00Z</published><updated>2025-06-01T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Learn how to use a Yubikey instead of a smartphone for Google's 2-factor authentication.</summary><content type="html"><![CDATA[<div><p><em>Want the yubikey+google 2-factor authentication solution?</em> <a href="#goodstuff">Skip to the good stuff</a>.</p><p>Passwords rule our lives on the Internet; they are the foundation of identity management. When a website or service wants to know who you are, you prove you are you with your username and password. However,  passwords aren't the fundamental piece of this identity management system.</p><p>What happens when you lose your password? We've all been through this jig before. The website usually sends an email to you with a link or instructions on how to reset your password. By proving you have access to your email, you are effectively proving you are you.</p><p>Then, your email account is the lowest common denominator; with access to your email account (nearly) all your other accounts can be accessed. 
If you use completely unique passwords for each service (and store them in a password manager like <a href="http://keepass.info">Keepass</a> or <a href="http://lastpass.com">Lastpass</a>), then access to your email account is even more attractive. Therefore, the importance of securing your email account cannot be overstated.</p><p>When it comes to securing your email, <a href="http://googleblog.blogspot.com/2011/02/advanced-sign-in-security-for-your.html">Google's 2-factor authentication</a> is pretty awesome. Even though there are still <a href="http://tech.kateva.org/2011/07/massive-security-hole-in-google-two.html">some important flaws</a> one should be aware of, it can significantly increase the security of your Google or Google Apps account.</p><p>For me there is one major drawback to Google's 2-factor offering: it requires a cellphone to be useful. This is a drawback for several reasons.</p><p>First, your smartphone isn't as secure as we would like. Mobile malware is on the rise--<a href="http://www.schneier.com/blog/archives/2011/11/android_malware.html">particularly</a> if you have an Android phone--and if I were a malware writer I would be targeting 2-factor authentication apps like Google's.</p><p>The second drawback most people won't identify with: I don't want to carry a smartphone with me! I travel. A lot. In fact, <a href="http://elusivetruth.net">I travel by bike</a>. When traveling by bike, minimizing weight is important, followed closely by minimizing the value of my equipment (shiny stuff <a href="https://twitter.com/#!/Ramblurr/status/144521420762918914">gets broken</a> or stolen), and smartphones are heavy and expensive. So, I carry a tiny and cheap $30 cellphone and swap sim cards as I enter new countries.</p><p>How then can I retain the benefit of Google's 2-factor authentication, while ditching the phone? 
I could generate OTPs through Google and write them down, which I have been doing for awhile, but that is a huge PITA.</p><p>Enter the <a href="http://yubico.com/yubikey">Yubikey</a>. The Yubikey is a tiny usb device that produces One-Time Passwords and appears to the OS as a USB keyboard making it work on all platforms. The Yubikey can hold two identities that can be configured according to four different options (Yubico OTP, OATH, static, challenge-response).</p><p>A Yubikey seems like the perfect lightweight, secure replacement for OTP generation, how then could I use it with Google's 2-factor authentication?</p><p><a name="goodstuff"></a></p><h3 id="finding-common-ground:-oath-totp">Finding common ground: OATH-TOTP</h3><p>Originally, I imagined some system that stored your Google 2-factor auth secret, and allowed you to auth with your Yubico OTP. Such a system would not be ideal, because wherever that system lived so would live your google secret. We want to be sure wherever the secret is stored is secure.</p><p>As it turns out <a href="http://code.google.com/p/google-authenticator/">Google's 2-factor authenticator</a> is an implementation of the OATH-TOTP protocol, a system for generating one-time passwords via a HMAC-SHA1 hash using the current time as input.</p><p>The Yubikey also happens to support the OATH-HOTP protocol (of which TOTP is a variant), so we should be able to configure the Yubikey to generate OATH-HOTP OTPs somehow. Unfortunately, the Yubikey is battery-less, so it is unable to store the current time.</p><p>All is not lost, for the Yubikey's fourth configuration option is a <em>challenge-response</em> configuration. This allows a client-side application to send a challenge to the Yubikey, which the Yubikey uses as input to generate a HMAC-SHA1 hash that becomes the response. 
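To make the TOTP side concrete, here is a minimal sketch of the computation in Python (standard library only; this illustrates RFC 6238, it is not the yubi_goog code): HMAC-SHA1 over a big-endian time-step counter, followed by the HOTP dynamic truncation.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an OATH-TOTP code (RFC 6238) using HMAC-SHA1."""
    counter = struct.pack(">Q", unix_time // step)       # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # HOTP dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))
```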
This is exactly the cryptographic hash used by OATH-TOTP and hence Google's 2-factor auth.</p><p>Around the same time I figured all this out, Yubico <a href="http://yubico.com/totp">posted</a> the same explanation I just gave, along with a Windows client-side application that used the challenge-response method described to enable Google authentication with a Yubikey. Huzzah!</p><h3 id="yubitotp-for-linux">YubiTOTP for Linux</h3><p>It took a while, but a <a href="http://mutantmonkey.in/">friend</a> and I eventually got around to implementing a similar client-side helper application for Linux.</p><p>The implementation is fairly simple (if not pretty). A challenge is generated based on the current time and sent to the Yubikey using the <em>ykchalresp</em> utility, and then the resulting HMAC-SHA1 hash is mangled according to the HOTP specification to produce a 6-digit code.</p><p>Before you can use the tool, you must configure your Yubikey, but after that, generating OTPs from your Yubikey is as simple as: <code>$ ./yubi_goog.py</code>.</p><p>The tool can also be used to generate OTPs without a Yubikey (using the <em>--generate</em> flag), but then you must enter the secret on every invocation.</p><p><strong>Grab the tool and instructions over at the <a href="https://github.com/Ramblurr/yubi-goog">github repo</a>.</strong></p><h3 id="not-perfect">Not Perfect</h3><p>We now have a way to generate OTPs for Google's 2-factor authentication without a phone; however, this isn't a perfect solution. Generating a TOTP requires the current time, so the Yubikey must be told the current time, which necessitates a client-side helper application.</p><p>So, using this method you can only use your Yubikey where you are able to run my helper app (or the Windows version from Yubico). I often find myself in Internet cafes or at other public terminals, where running a Python script isn't feasible.</p><p>As of yet I do not have a working solution. 
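</p><p>The arithmetic behind those 6 digits is compact enough to sketch. The following is plain Python implementing the standard algorithm (no Yubikey involved, and not the yubi_goog code itself): it derives an RFC 4226 HOTP code from an HMAC-SHA1 digest, and TOTP simply feeds in the current 30-second time window as the counter.</p><pre><code class="language-python">import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    # RFC 4226: HMAC-SHA1 over an 8-byte big-endian counter,
    # then "dynamic truncation" down to a short decimal code.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] % 16                  # low nibble picks a 4-byte window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] % 2**31
    return str(code % 10**digits).zfill(digits)

def totp(secret, digits=6, step=30):
    # RFC 6238 (what Google Authenticator implements): the counter is
    # just the current Unix time divided into 30-second steps.
    return hotp(secret, int(time.time()) // step, digits)
</code></pre><p>In the Yubikey setup, the <code>hmac.new</code> step is what the challenge-response round trip to the key replaces, so the secret never leaves the hardware.</p><p>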
It would be fantastic if Google would natively support the Yubikey, but in the meantime we'll have to be satisfied with innovative hacks.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fyubikey-google-auth%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fyubikey-google-auth%2F">Bluesky</a></p>]]></content></entry><entry><title>PBM is dead! Long live PBM!</title><link href="https://casey.link/blog/pbm-is-dead/" rel="alternate" type="text/html" /><id>tag:casey.link,2011-03-12:/blog/pbm-is-dead/</id><published>2011-03-12T00:00:00Z</published><updated>2025-06-01T00:00:00Z</updated><author><name>Casey Link</name></author><summary>PBM gaming created experiences of depth and anticipation that modern games can't match—and there's still a place for this imaginative style in our fast-paced world.</summary><content type="html"><![CDATA[<div><figure class="image"><img src="mailboxes1-500.webp" alt="Picture of old mailboxes"><figcaption class="text-center mt-1">Photo (C) silverlunace. CC licensed.</figcaption></figure><blockquote><p>Play-by-mail gaming</p></blockquote><p>Assuming you know what it is, I suspect that phrase produces two different reactions in the minds of those who read it. 
Either it conjures up fond memories of a special era in gaming, an era in which you spent hours with piles of papers haphazardly spread about, lost in the universe of your imagination, and then hours more in anticipation, waiting for that darned postman to arrive at your mailbox with the results of your dastardly scheming or the outcome of a space battle of truly epic proportions; or that phrase incites an altogether different thought, one of a bygone era of rotary telephones and hand-written letters, which is to say thoughts of the recently antiquated.</p><figure class="image"><img src="rotary-dial-200.webp" alt="Picture of an old rotary telephone dial"><figcaption class="text-center mt-1">Photo (C) R Sull. CC licensed.</figcaption></figure><p>The postal system as a medium for gaming has certainly declined in recent decades, and unfortunately the number of games in the style that PBM (play-by-mail) promoted has declined as well. In the 90s e-mail developed, and many PBMs became PBEMs (play-by-email) with much success; however, the transition to the Internet did not fare as well. The way most people use and view the Internet is directly contradictory to the important core mechanic of <em>anticipation</em> (i.e., lack of instant feedback). 
After all, the Internet enabled real-time mechanics that were previously impossible (not to mention the advances in computers that allow for 3D graphics).</p><p>This PBM style can be generally summed up in this laundry list:</p><ul><li><span><strong>Asynchronous</strong>, <strong>turn-based play</strong> -- players didn't have to gather and play simultaneously</span></li><li><span><strong>Multiplayer</strong> -- player interaction was a driving force</span></li><li><span><strong>Massive scale</strong> -- a position consisted of running an empire or a group of characters</span></li><li><span><strong>Depth</strong> -- a large number of possible player actions</span></li><li><span>Use of <strong>anticipation</strong> and <strong>suspense</strong> to keep players interested</span></li><li><span><strong><a href="http://playbymail.net/mybb/showthread.php?tid=34">Imagination</a></strong> based -- as opposed to graphic visuals</span></li></ul><p>Imagination has been replaced with intense 3D visuals. The feeling of anticipation and suspense was replaced by the instant gratification made possible by real-time feedback systems. Turn-based, asynchronous multiplayer was replaced by real-time synchronous multiplayer. As a result of these replacements, the depth and scale of modern games have suffered. For example, in a real-time strategy game you can't have both massive scale and extreme depth, unless you want games to last for days, a big drawback for multiplayer games. These supplanted qualities thrived in the postal medium, where long and careful planning was favored over instant feedback.</p><style>
div#CAD pre {
border: none;
padding: 1.5em;
margin: auto;
width: fit-content;
}
</style><div id="CAD" role="img" aria-label="ASCII art logo for Conquest and Destiny, a 90s PBM game. The logo shows the letters 'C' and 'D' stylized within a rectangular border.">
<pre aria-hidden="true">
 ****************************
 |                          |
 |        _       __        |
 |       / \     |  \       |
 |      /   \    |   \      |
 |     /         |    \     |
 |    /          |     \    |
 |   /           |      \   |
 |  (ONQUEST and |ESTINY )  |
 |   \           |      /   |
 |    \          |     /    |
 |     \         |    /     |
 |      \   /    |   /      |
 |       \_/     |__/       |
 |                          |
 |                          |
 ****************************
</pre>
</div><p>Many people don't fully understand the depth and scale of these old PBM games. These games were not limited to your familiar chess, scrabble, and diplomacy games. They were the first true massively multiplayer games. For example, one such game was <a href="http://binaryelysium.com/pbm/conquest_and_destiny/">Conquest and Destiny</a>, an open-ended, civilization building, role playing game that ran in the early 90s in which every player commanded his/her own race of custom designed beings. The game galaxy had over 7 million stars and planets to explore, colonize, and conquer. While it boasted players in the hundreds or thousands as opposed to <a href="http://www.eveonline.com/news.asp?a=single&amp;nid=3044&amp;tid=1">EVE Online's 300,000+</a> or <a href="http://www.gamasutra.com/php-bin/news_index.php?story=17062">World of Warcraft's 10 million+</a>, it was definitely a massive game for its time. Conquest and Destiny lived and died before the Internet, consequently all we have left from this massive text-based universe is a <a href="http://binaryelysium.com/pbm/conquest_and_destiny/">rulebook and advertisement</a> rescued from the bowels of USENET. Undoubtedly there were many fascinating inter-player narratives of the kind EVE is famous for, but we'll never read them.</p><figure class="image"><img src="hyb_war_logo.webp" alt="Hyborian War logo"><figcaption class="text-center mt-1">Hyborian War Logo (C) Reality Simulations, Inc.</figcaption></figure><p>Another example of a massive PBM is <a href="http://reality.com/hwpcont.htm">Hyborian War</a>, a game of imperial conquest in the age of Conan. It began in the 1980s and is still running strong today! Check out an <a href="http://grimfinger.net/HWKingdomReports/AquiloniaKingdomReport.pdf">example turn report</a> for a Hyborian War position, courtesy of GrimFinger's <a href="http://grimfinger.net/HyborianWar.html">Hyborian War site</a>. 
There were hundreds of these games once upon a time (a by-no-means exhaustive list can be found <a href="http://playbymail.net/mybb/showthread.php?tid=2">here</a>).</p><p>Internet-powered real-time graphical games are not the logical conclusion of modern gaming. That is, I do not believe real-time is better than turn-based simply because it is newer; for proof of this, look no further than the wildly successful Civilization franchise.</p><p>Bringing PBM into the second decade of the 21st century faces a major hurdle: the modern generation of gamers might not have the patience for, or interest in, playing a game where feedback between turns is measured in days or more. I suspect this isn't an immutable fact; rather, gamers simply need to be (re)introduced to the style and their imaginations reactivated.</p><p>The turn-based, long-term, and imaginative play-style is certainly still possible, and in today's fast-paced, media-intensive life it might be a welcome respite for many gamers.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fpbm-is-dead%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fpbm-is-dead%2F">Bluesky</a></p>]]></content></entry><entry><title>Programming as Modern Art</title><link href="https://casey.link/blog/programming-as-modern-art/" rel="alternate" type="text/html" /><id>tag:casey.link,2010-11-06:/blog/programming-as-modern-art/</id><published>2010-11-06T00:00:00Z</published><updated>2025-06-01T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Piet is a programming language where code looks like abstract art. Frustrated by the lack of tools, I built a graphical IDE to make coding in colors actually usable.</summary><content type="html"><![CDATA[<div><p>As a coder I'm always on the lookout for interesting technologies to complement my toolset. 
This is not a post about one of those technologies.</p><p>There is a class of programming languages classified as '<em>esoteric</em>'. While programming languages designed for production use concern themselves with practical features such as readability, performance, and flexibility, esoteric languages explore the boundaries of language design with rampant disregard for all the aforementioned features. A few famous examples of esoteric programming languages (if it even makes sense to refer to an esoteric something as famous) are <a href="http://www.muppetlabs.com/~breadbox/bf/">Brainfuck</a>, <a href="http://lolcode.com/">LOLCODE</a>, and <a href="http://en.wikipedia.org/wiki/Malbolge">Malbolge</a>.</p><p>In this post I want to talk about an esoteric language that attempts to break out of a mold every other programming language has taken for granted. This language is <a href="http://www.dangermouse.net/esoteric/piet.html">Piet</a>, created by <a href="http://www.dangermouse.net/">David Morgan-Mar</a>.</p><p>From <a href="http://www.dangermouse.net/esoteric/piet.html">Piet's specification</a>:</p><blockquote><p>Piet is a programming language in which programs look like abstract paintings. The language is named after Piet Mondrian, who pioneered the field of geometric abstract art.</p></blockquote><figure class="image"><img src="Piet-4.gif" alt="An image designed in the style of Mondrian&apos;s geometric art, featuring colored rectangles (red, cyan, yellow, blue, magenta, and pink) separated by black lines on a white background"><figcaption class="text-center mt-1">Sample Piet program</figcaption></figure><p>Piet differs from nearly every other language because it is expressed not as text but as colored blocks. Importantly, Piet does not use a simple one-to-one mapping of operations onto colors, such as "add is green and divide is red", for that would be trivial and uninteresting. Rather, operations are represented as changes in hue and lightness. 
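</p><p>Concretely, the specification keys each operation off two numbers: how far the hue cycles and how far the colour darkens between two adjacent colour blocks. A rough sketch of that lookup in Python (the table follows the Piet specification; the function and colour names are my own shorthand, not part of any Piet tool):</p><pre><code class="language-python"># Piet's command table: (hue steps, darkness steps) -> operation.
PIET_OPS = {
    (0, 1): "push",      (0, 2): "pop",
    (1, 0): "add",       (1, 1): "subtract",    (1, 2): "multiply",
    (2, 0): "divide",    (2, 1): "mod",         (2, 2): "not",
    (3, 0): "greater",   (3, 1): "pointer",     (3, 2): "switch",
    (4, 0): "duplicate", (4, 1): "roll",        (4, 2): "in(number)",
    (5, 0): "in(char)",  (5, 1): "out(number)", (5, 2): "out(char)",
}

HUES = ["red", "yellow", "green", "cyan", "blue", "magenta"]
LIGHT = ["light", "normal", "dark"]

def op_between(a, b):
    """Operation encoded by stepping from colour a to colour b,
    where each colour is a (hue, lightness) pair."""
    dh = (HUES.index(b[0]) - HUES.index(a[0])) % 6   # the hue cycle is circular
    dl = (LIGHT.index(b[1]) - LIGHT.index(a[1])) % 3  # and so is lightness
    return PIET_OPS.get((dh, dl), "none")
</code></pre><p>Stepping from light red into normal red (one step darker, same hue), for example, encodes <code>push</code>, while stepping from red to yellow at the same lightness encodes <code>add</code>.</p><p>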
It is a stack-oriented language with low-level operations on par with many assembly languages. To the left is a sample Piet program, created by Thomas Schoch. It prints "Piet!", and is designed to look like a painting by Piet Mondrian. The image is <strong>literally</strong> the program. If you are interested in how the language works, I suggest you take a peek at the <a href="http://www.dangermouse.net/esoteric/piet.html">specification</a>, because there is much more to it than I have presented here.</p><p>Often a piece of code is referred to as beautiful or ugly; however, what the speaker usually means is that the algorithm, idea, or strategy behind the code is beautiful or elegant. Rarely can you present a piece of code and speak of the code itself as possessing the aesthetic qualities we associate with things of real beauty.</p><figure class="image"><img src="beauty_0206.webp" alt="Tree diagram showing a hierarchical directory structure with numbered nodes 1-6 at the top, branching down through multiple DIR (directory) nodes containing entries like &apos;A&apos;, &apos;B&apos;, &apos;fish&apos;, and &apos;tuna&apos;, ending with FILE nodes at the bottom"><figcaption class="text-center mt-1">A tree diagram of Subversion's Delta Editor from Beautiful Code</figcaption></figure><p>This is a figure from Karl Fogel's chapter in O'Reilly's book <a href="http://beautifulcode.oreillynet.com/">Beautiful Code</a> on Subversion's Delta Editor (full chapter <a href="http://www.red-bean.com/kfogel/beautiful-code/bc-chapter-02.html">here</a>). Is the beauty of that tree diagram not immediately apparent to you? Don't worry, it wasn't obvious to the author either.</p><blockquote><p>I cannot claim that the beauty of this interface was immediately obvious to me. - Karl Fogel</p></blockquote><p>Piet is an attempt to provide a functional language that can be used to create beautiful programs in the style of modern abstract painters. 
The <a href="http://www.dangermouse.net/esoteric/piet/samples.html">sample programs</a> page has quite a few examples of programs written in Piet. Some are beautiful and others are downright ugly.</p><p>Take these two programs:</p><p><img class="inline" src="Piet_hello_big.png" alt="Crude composition of large colored blocks in red, blue, green, yellow, pink, and gray"> <img class="inline" src="hw1-11.gif" alt="Colorful pixelated pattern with multicolored squares and a black star shape in the center"></p><p>They both print the string "Hello World", yet one is obviously more appealing than the other. Furthermore, the aesthetic properties of each program can be judged by any layman.</p><p>Intrigued by this concept of beautiful programs, I set out to create such a program in Piet, but I soon realized that in breaking out of the confines of the text editor, Piet had nowhere else to land. That is, conventional image editors (such as the GIMP) are not suited to creating Piet programs. This is because operations in Piet are defined as <em>changes</em> in hue and lightness, which means that to know which operation one particular pixel represents, you must know the operations of every preceding pixel. Without assistance from the editor, creating a program of any non-trivial length is extremely difficult, and you are unlikely to understand even a trivial program after putting it down for several days.</p><p>There are <a href="http://www.dangermouse.net/esoteric/piet/tools.html">a couple of</a> development tools for Piet out there, but most are incomplete or suffering from bit rot. Given the graphical nature of Piet, it only makes sense that the IDE and debugger should be graphical as well. With this in mind, and convinced that Piet could actually be a usable language given the right environment, I have developed an IDE and debugger.</p><p><a href="http://github.com/Ramblurr/PietCreator/wiki">Piet Creator</a> lives on github. 
The <a href="https://github.com/Ramblurr/PietCreator">source</a> is released under the GPL v3. It is written in C++ with Qt, so it should run on Linux, Mac, and Windows, but I have only tested it on Linux. For the backend interpreter it uses a slightly modified version of the fantastic Piet interpreter <a href="http://www.bertnase.de/npiet/">npiet</a> written by Erik Schoenfelder.</p><p><img class="inline" src="pietcreator4.webp" alt="Screenshot of Piet Creator in development mode"> <img class="inline" src="pietcreator3.webp" alt="Screenshot of Piet Creator in development mode"></p><p>Practically speaking, I realize Piet and Piet Creator are probably useless. Then again, I am convinced many programmers take themselves too seriously, so Piet Creator is a serious exercise in not being serious.</p><p>I have grand (if silly) visions for Piet and Piet Creator: sub-procedures, standard libraries, and arbitrary color sets. 
Imagine a world where programmers aren't viewed as digital carpenters or engineers pushing bits around to some functional end, but as intriguing artists slinging color across a digital canvas, creating functional art appreciable by all.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Fprogramming-as-modern-art%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Fprogramming-as-modern-art%2F">Bluesky</a></p>]]></content></entry><entry><title>Life in the Cloud</title><link href="https://casey.link/blog/life-in-the-cloud/" rel="alternate" type="text/html" /><id>tag:casey.link,2008-03-23:/blog/life-in-the-cloud/</id><published>2008-03-23T00:00:00Z</published><updated>2025-06-01T00:00:00Z</updated><author><name>Casey Link</name></author><summary>Internet technology is seeing an overall trend towards increased connectivity (always on), information sharing (openness), and most importantly, data existing in "the Cloud" (anywhere access)</summary><content type="html"><![CDATA[<div><p>Internet technology is seeing an overall trend towards <strong>increased connectivity</strong> (always on), <strong>information sharing</strong> (openness), and most importantly, <strong>data existing in "the Cloud"</strong> (anywhere access). This is a fascinating prospect, because no one can tell what the results will be.</p><p>Consider: <a href="http://www.amazon.com/Kindle-Amazons-Wireless-Reading-Device/dp/B000FI73MA">The Amazon Kindle</a>. At its core this device does something that many devices have done before; that is, it offers always-on, transparent connectivity. But it took this commonplace technology and applied it to an entirely different market.</p><p>Technology migration between markets is common enough, so why is this case special? Because the Kindle did not just take an existing technology. 
It took a technology and gave users access to the <em>Cloud.</em></p><h2 id="what-is-this-cloud-thing-anyway?">What is this Cloud thing anyway?</h2><p>Many people are already accustomed to the idea of a global village facilitated through the internet. In fact, Herbert Marshall McLuhan coined "global village" in his 1962 work <a href="http://en.wikipedia.org/wiki/The_Gutenberg_Galaxy">The Gutenberg Galaxy</a>, describing how electronic media dissolves traditional communication barriers between humans. This creates a world where there are virtually no limits, and people can exchange ideas, share knowledge, and provide services on a global scale. The internet has enabled all of these feats, yet our digital interactions remain constrained by physical boundaries.</p><p>Playing off the village metaphor, this physical limit could be analogized as personal houses in the cyber-village. At your metaphorical house is everything important to you. You have</p><ul>
	<li>a closet full of email</li>
	<li>stacks of MP3s and movies</li>
	<li>filing cabinets full of documents, from your first 9th-grade paper onward</li>
	<li>your finances</li>
	<li>your latest business model</li>
</ul><p>Not to mention the room dedicated to your photo albums and the basement/workshop with toolboxes containing:</p><ul>
	<li>word processors</li>
	<li>media players</li>
	<li>text editors</li>
	<li>photo editing tools</li>
</ul><p>In real life, of course, this house is your computer. Rather, it is probably a couple of computers: at work, home, or in your pocket. The thought of losing your virtual house to any number of possible disasters (e.g., theft, crackers, disk corruption) is devastating. The dread is a consequence of the technology shift. An information technology shift from the analog to the digital, from physical to abstract. We're approaching an era where this digital house dissolves entirely. Where your information exists everywhere and nowhere, accessible from any device, anywhere.</p><p>The Kindle represents an early glimpse of this transformation. With the Kindle you have access to books, newspapers, blogs, Wikipedia, and the internet in general, all without connecting to a computer.</p><p>With the Kindle you have access to books, newspapers, blogs, Wikipedia, and the internet in general, all without connecting to a computer. You can buy books through the Kindle store and while they are sent to your device Amazon keeps a copy of everything you purchase (subscriptions, books, magazines) in the Cloud. Not only that but the Kindle automatically sends your notes, annotations, clippings, and bookmarks in the Cloud.</p><p>But this post isn't supposed to be Kindle advertisement. What the Kindle lacks is openness. Being wrapped in layers of DRM forces the Kindle into not sharing nicely with others.</p><p>Openness will be crucial to life in the Cloud. As we move our data and applications into this new distributed model, we need assurance that our information won't be locked into proprietary silos controlled by a single company. The free and open source software movement has already demonstrated the power of collaborative development and shared standards. 
These are principles that become even more vital when our digital lives exist across multiple platforms and services.</p></div><p style="margin-top: 2em; font-size: 0.875em; color: #71717a;">Reply via: <a href="mailto:casey@outskirtslabs.com?subject=Re%3A+https%3A%2F%2Fcasey.link%2Fblog%2Flife-in-the-cloud%2F">Email</a> · <a href="https://bsky.app/intent/compose?text=%40casey.link+https%3A%2F%2Fcasey.link%2Fblog%2Flife-in-the-cloud%2F">Bluesky</a></p>]]></content></entry></feed>