Last week I migrated the various web sites I host¹ from Nginx to Caddy. I could nerd on about how this finally lets me offer HTTP/3² and means I’m using a daemon written in a memory-safe language, but mostly its config language is different enough from Apache and Nginx that I’ve been intrigued to try it.


On the old server, I was using Let’s Encrypt’s certbot to manage my TLS certificates. This was fine. The main thing that bothered me was using a wildcard cert out of laziness. With all the domains and subdomains I’m hosting pages for, I’d otherwise need to write scripts to automate the initial cert creation and generate the per-site Nginx config. Look, it’s not a huge deal, an hour or so to get it all in place, but my hobby projects are supposed to be fun and that did not sound fun. I’ve written enough scripts like those, and the joy I found in solving that type of problem has gone stale. When you’ve got a kid, every minute of hobby time is precious.

Using a wildcard cert with Nginx meant simplifying both the cert generation and the config (since many sites get to use the same ssl_certificate path). However, it also meant using the DNS challenge, and using the DNS challenge meant making the server able to modify DNS! 😱³
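For context, a wildcard issuance via the DNS challenge would have looked something like this sketch (assuming certbot’s Cloudflare plugin; the DNS provider, paths, and domain here are all illustrative, not what I actually ran):

  # Illustrative only: assumes the certbot-dns-cloudflare plugin is installed.
  # The API token in the credentials file is what has to live on the server —
  # the part that warranted the scream emoji above.
  #
  # /root/.secrets/cloudflare.ini contains:
  #   dns_cloudflare_api_token = <token>
  certbot certonly \
    --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
    -d 'example.org' -d '*.example.org'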

With Caddy, I did nothing and got it all: creating any site config results in a TLS cert being provisioned and served. It’s amazing. Not only did I not need to provision certs, there’s not a single line of config telling the server where to find them.

I’ll get to snippets (the import) in a moment, but this is the entire config for a site with its own auto-renewing cert:

more.theory.org {
        import archived-blog
}

Snippets and placeholders

The boilerplate and copypasta I’ve needed in Apache and then Nginx configs always bothered me. With Caddy snippets and Caddy placeholders I’ve been able to DRY up the config to a small number of (often just one) imports. It’s lovely. ✨

On Nginx, I used the include directive in order to reuse config as much as possible. This helps a lot, but you’re limited by the lack of variables to parameterize over and an inability to pass in arguments.

This means each site in Nginx needs a lot of specifics: document root, paths to the TLS files, and the paths to the logs.

server {

  include local/ssl.conf;
  ssl_certificate /etc/letsencrypt/live/;
  ssl_certificate_key /etc/letsencrypt/live/;
  ssl_trusted_certificate /etc/letsencrypt/live/;

  root /var/www/;
  include local/archived-blog.conf;

  access_log /var/log/nginx/;
  error_log /var/log/nginx/;
}

With Caddy I was able to make a single snippet that could be used repeatedly:

(archived-blog) {
        import default
        root /srv/www/{labels.1}.{labels.0}/{labels.2}/html
        file_server {
                index index.html index.xml
                hide wp-admin* wp-login*
        }
        header Cache-Control max-age=31536000
}

The default snippet includes things every single site follows (like where logging goes). The root directive uses the {labels.*} placeholders (a shortened version of {http.request.host.labels.*}), which split the hostname at the dots (0: “org”, 1: “theory”, 2: “more”) so we can parameterize where to find the files. The file_server and the header directives are specific to our notion of an “archived blog”.
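The post doesn’t show the default snippet itself, but a minimal sketch of what one might contain (log and encode are real Caddy directives; the specific contents here are hypothetical) could be:

(default) {
        # Hypothetical contents: every site logs to the same place
        log {
                output file /var/log/caddy/access.log
        }
        encode gzip
}

Anything that does import default then picks all of this up.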

Thus, configuring five sites turns into a single block listing all five hostnames:

…, …, …, …, … {
        import archived-blog
}
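To make the placeholder expansion concrete, here is how the snippet’s root line resolves for the more.theory.org example (labels as given above):

# more.theory.org → labels.0 = "org", labels.1 = "theory", labels.2 = "more"
root /srv/www/{labels.1}.{labels.0}/{labels.2}/html
# …which expands to:
root /srv/www/theory.org/more/html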

In all, I went from about 880 lines of Nginx config to 375 lines of Caddy config. It’s a lot less to manage and think about and a lot easier to tweak.

Glorious! 🤩

  1. Yeah, I still host websites on my own server like the indieweb is living the dream of the early aughts. I’m wildly aware of how out of touch this is. Not only because all these sites could be freely hosted as static pages via a variety of services, but because all the search engine companies now view these pages as nothing more than data to plunder into their LLM models and never link back to. ↩︎

  2. Nginx gained support for QUIC and HTTP/3 in v1.25. Unfortunately, Debian 12 is on v1.22 and I have no desire to build/package/backport things. Been there, done that. I’m living the stable life (in this respect). ↩︎

  3. I generally try to act professional in my hobbies, but as hobbies are first and foremost for fun, I’ll take calculated risks to keep them so. The thinking went: (1) It’s unlikely this server is a specific target, so I’ll only worry about the script kiddies. (2) The attack surface is narrow: I only expose Postfix and Nginx and keep them patched. (3) The DNS token is IP restricted.

    I felt this boiled down to: am I worried about someone hitting me with a code execution zero day for Postfix or Nginx? No. ↩︎