'Abandoned Computer' by Rich Anderson
Computing for the Apocalypse
As I wrote about recently, I am not a full-stack developer. I have spent over a decade on front-end work, focusing on static-site generators like Jekyll, Hugo, and 11ty. All of my work online is hosted with services like GitHub Pages, Netlify, or Vercel. I'm grateful for these services because they're free and accessible, but they're still platforms I don't own, and they could close up shop at any time. It's a fantastic way to get started, but it ultimately goes against the IndieWeb ethos.
I have over two dozen different projects online with Netlify, for instance, ranging from my main site https://brennan.day to a bunch of other projects, which you can view in my portfolio. It's extremely cool how easy it is to take a git repo full of my files and have it online in a matter of minutes.
But Netlify is far from perfect. Does anybody remember GatsbyJS? I was so excited to have a static-site framework that used React (mostly because everybody told me I needed to learn React to be employable, but that's beside the point). After Gatsby was bought by Netlify, the project died. It was clearly an acquihire but never stated as such.
This is just one problematic example of many, and proof that I need to start seriously thinking about running and hosting my own sites on my own bare metal. And that requires learning the back-end in earnest.
Why now? I certainly don't need to remind you that our world is in a fragile and unstable state right now. It would do everyone good to learn how to grow their own vegetables, sew their own clothing, practice proper first aid, and cultivate their own sourdough starter. Regardless of the state of the world, homesteading and permaculture are interesting and useful hobbies.
Permacomputing
I'm not the first person to explore the concept of digital homesteading; the idea of permacomputing has been around for a long while. I especially enjoy Devine Lu Linvega's write-up on the subject.
The term "permacomputing" was coined in 2020 by Ville-Matias "Viznut" Heikkilä. Permaculture finds clever ways to let nature do the work with minimal artificial energy, and permacomputing asks: how do we make the most of existing computational resources rather than constantly demanding more power?
Maximize hardware lifespan, minimize energy usage, and use computation only when it has a strengthening effect on ecosystems. Instead of throwing away old hardware, find new purposes for it.
Permacomputing in the Wild
Hundred Rabbits, a two-person collective living and working from a sailboat, are pioneers of permacomputing. In 2017, while trying to download a 10GB Apple Xcode update using only 5GB SIM cards, Devine Lu Linvega and Rekka Bellum had to put their iPhone in a bag and hoist it up the mast. This absurdity led them to create Uxn, a tiny virtual machine that runs on everything from Game Boy Advance to Raspberry Pi Pico.
The first question permacomputing asks is: "Where is technology not appropriate? Where can it be removed?" Technology gets sold as a timesaver but just adds complexity and creates dependency on supply chains.
My project asks the question: what can I run myself, on hardware I control, that will work when third-party services don't?
The community has developed design principles founded on permaculture's ethics: Earth Care, People Care, and Fair Share. Frugal computing means learning to get by with as little as needed while resources are still available, similar to learning to use a first aid kit while still living in the city: you practice while correcting mistakes is still feasible.
Permacomputing exists as two intertwined strands: an incentive to reuse and repurpose existing technology, and evolving design principles to guide that reuse.
Actual Logistics
Despite graduating in April, I still have quite a few unused perks in my GitHub Student Developer Pack (which, by the way, you should definitely apply for if you're in school, even if you're not into computer science or programming). One of these perks is $200 in credits on DigitalOcean for a year. I've tried out DigitalOcean in the past, but don't currently use it for anything because it is overkill for my simple projects.
But this was a perfect opportunity for me to finally learn about cloud computing, databases, and DevOps with Docker and Caddy, and to break things without worry before transitioning over to my own actual machines. My credits translate to roughly $16/month, which buys me a droplet with 2 GB of memory, 1 Intel vCPU, and a 70 GB SSD, running Ubuntu 24.04 (LTS) x64.
Going back to digital homesteading, my idea is this: I want a homelab monorepo that runs several different software applications in containers with Docker. Primarily, I want a machine that can take care of everything for me in the event that I have electricity but no Internet; but on the off chance the Internet survives the apocalypse, I want community features as well.
Here's what I have, so far:
Core Infrastructure
- brennan.page - Landing page with service status dashboard
- docker.brennan.page - Portainer Docker management UI
- monitor.brennan.page - System monitoring with real-time stats
- files.brennan.page - FileBrowser file management interface
- wiki.brennan.page - Git-backed static documentation wiki
Productivity
- tasks.brennan.page - Vikunja task management system
- notes.brennan.page - HedgeDoc collaborative markdown editing
- bookmarks.brennan.page - Linkding bookmark manager
- music.brennan.page - Navidrome music streaming service
Community Platforms
- blog.brennan.page - WriteFreely minimalist blogging platform
- forum.brennan.page - Flarum community discussion forum
- rss.brennan.page - FreshRSS feed aggregator
- share.brennan.page - Plik temporary file sharing
- poll.brennan.page - Rallly meeting/poll scheduler
Due to the rather weak specs, I didn't have enough overhead to run certain software I wanted, like Jellyfin for media, but this was certainly a good-enough start. The important thing was learning how to use all of this stuff, and keeping track of my learning with the wiki and monitoring tools hosted on the server itself.
Implementation
What started as a philosophical exploration quickly became a practical education in backend development and systems administration. I broke the project into distinct phases.
Phase 1: Foundation Infrastructure
The first step was establishing the core infrastructure. This meant setting up Docker as the container runtime, Caddy as a reverse proxy with automatic HTTPS, and a few management tools: Portainer for the Docker UI and FileBrowser for file operations. I created a simple landing page for all services on the root domain.
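To make that concrete, here is a rough sketch of the bootstrap, assuming a fresh droplet with Docker already installed. The container names, ports, and Caddyfile contents are illustrative rather than my exact configuration.

```bash
# Shared network so Caddy can reach application containers by name
docker network create caddy

# A tiny Caddyfile: one site block per subdomain, TLS certificates handled automatically
cat > Caddyfile <<'EOF'
docker.brennan.page {
    reverse_proxy portainer:9000
}
EOF

# Caddy terminates HTTPS on 80/443 and proxies to containers on the shared network
docker run -d --name caddy --network caddy \
  -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  -v caddy_data:/data \
  caddy:2-alpine

# Portainer for container management, reachable only through the reverse proxy
docker run -d --name portainer --network caddy \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce
```

Every other service follows the same pattern: attach the container to the shared network, then add a site block pointing its subdomain at the container's internal port.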
Phase 2: Monitoring and Documentation
You can't manage what you can't measure. I implemented a monitoring system and a Git-backed wiki using MkDocs. Every configuration, troubleshooting step, and architectural decision is (supposedly) documented there.
Everything follows a local-first development workflow: I write content locally in Markdown, build it into a static site, and deploy it via rsync to the server.
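The wiki deploy itself is just a couple of commands. This is a hedged sketch with a placeholder SSH host alias and paths, not my actual script.

```bash
#!/usr/bin/env bash
# Build the MkDocs wiki locally, then push the static output to the server.
# "homelab" is a placeholder SSH host alias; adjust paths to your own layout.
set -euo pipefail

mkdocs build --strict                          # render the Markdown sources into ./site
rsync -avz --delete site/ homelab:/srv/wiki/   # mirror the build, removing stale files
```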
Phase 3: Personal Productivity
I moved on to implementing tools for personal productivity. This required setting up a shared PostgreSQL database to serve multiple applications. I deployed Vikunja for task management, HedgeDoc for collaborative note-taking, Linkding for bookmark management, and Navidrome for music streaming. Each service is containerized with proper resource limits and network isolation.
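As an illustration of the shared-database approach, a single Postgres container on its own network can carry one database and one role per application. The names, passwords, and memory limit below are placeholders, not my real configuration.

```bash
# One Postgres instance on its own Docker network, shared by several apps
docker network create internal_db
docker run -d --name postgres --network internal_db \
  -e POSTGRES_PASSWORD=change-me \
  -v pg_data:/var/lib/postgresql/data \
  --memory 512m \
  postgres:16-alpine

# Once Postgres is up: a dedicated database and user per service keeps apps isolated
docker exec -i postgres psql -U postgres <<'SQL'
CREATE USER vikunja WITH PASSWORD 'also-change-me';
CREATE DATABASE vikunja OWNER vikunja;
CREATE USER hedgedoc WITH PASSWORD 'also-change-me';
CREATE DATABASE hedgedoc OWNER hedgedoc;
SQL
```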
Architecture and Constraints
Working with a 2 GB RAM constraint taught me lessons in resource optimization. It took me back to my teenage years, using a similarly-specced machine bought off Kijiji. The infrastructure currently uses approximately 800 MB of RAM (about 40% of the available 2 GB), leaving headroom for growth.
Resource Management Strategy
- Memory Limits: Every container has explicit memory limits to prevent runaway usage (see the sketch after this list)
- Shared Database: Instead of separate databases per service, I use a single PostgreSQL instance with multiple databases and users (though certain services require different database schemas)
- Lightweight Images: I prefer Alpine-based containers for their minimal footprint
- Network Segmentation: Docker networks are organized by function (caddy, internal_db, monitoring)
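The memory-limit point, for instance, comes down to a couple of Docker commands. The numbers and container name here are examples, not my production values.

```bash
# Cap a container's memory (and swap) so one misbehaving service can't starve the rest
docker update --memory 256m --memory-swap 256m linkding

# Spot-check that total usage stays within the 2 GB budget
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}'
```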
Security Considerations
The project implements good-enough security practices (a rough sketch follows the list):
- SSH key authentication only (no password auth)
- UFW firewall with minimal open ports
- Docker network isolation between services
- Automatic SSL certificates via Caddy
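In shell terms, the hardening above looks roughly like this (run as root or with sudo). It's a sketch, not a complete hardening guide.

```bash
# Firewall: deny inbound by default, allow only SSH and the web ports Caddy needs
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# Turn off password logins once an SSH key is in place
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh
```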
Local-First
All configurations are written and tested locally before deployment. The Git repository serves as the single source of truth, with the server acting purely as a deployment target. Why?
- Version Control: Every change is tracked and can be rolled back
- Offline Development: I can work on configurations without internet access
- Disaster Recovery: The entire infrastructure can be rebuilt from the Git repository, as sketched below
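What "rebuilt from the Git repository" means in practice is roughly the following, assuming the services are declared in a committed Compose file; the repository URL is a placeholder.

```bash
# Recreate the whole stack on a fresh machine from the committed configuration
git clone https://github.com/example/homelab.git
cd homelab
docker compose up -d   # bring every service back up as declared in the repo
```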
Conclusion
My current system hosts 11 active containers serving 8 different services, all with HTTPS certificates and monitoring. Response times are consistently under 200 ms. Isn't that impressive?
It's nice that now all my data resides on infrastructure I control, and there's no third-party data collection or advertising tracking. I'm not locked into any particular service provider's roadmap.
My homelab embodies permacomputing principles without me even realizing it. Working with a 2GB RAM constraint made me think like those demoscene artists who squeeze work from restricted resources.
I want to create ecosystems of services that work together, yet can operate independently. And just as in the real world, it's important to make efficient use of limited resources and to use open-source tools that can be endlessly modified and extended for an unknown future.
Of course, this isn't actually everything. I am still reliant on domain name registrars, payment processors, and a thousand other things. I've quoted Carl Sagan before: if you want to bake an apple pie from scratch, you must first invent the universe. There is a lot at play here that aligns with that thinking. On the other hand, there is already so much that's here for us to use.
My homelab project is about taking meaningful steps toward digital resilience, learning valuable skills, and building systems that align with my values. In this world we share, having the knowledge and ability to maintain your own infrastructure is a radical act of hope.