Show HN: I've built a nice home server OS (lightwhale.asklandd.dk)
ohai!
I've released Lightwhale 3, which is possibly the easiest way to self-host Docker containers.
It's a free, immutable Linux system purpose-built to live-boot straight into a working Docker Engine, thereby shortcutting the need for installation, configuration, and maintenance. Its simple design makes it easy to learn, and its low memory footprint should make it especially attractive during these times of RAMageddon.
If this has piqued your interest, do check it out, along with its easy-to-follow Getting Started guide.
In any event, have a nice day! =)
Kudos to the great project!
But functionally, like you I find Ubuntu Server fine. I run apt update and upgrade a couple of times a year, and it's local-only with Tailscale access.
I find these immutable OSes really nice on a laptop or desktop. The home directory is the only thing that can be written to, so the OS is supposed to be more stable and can't break easily.
My workstation is where I work, fiddle with things, experiment. It's a workbench in the context of software development. So I need to be able to modify and configure everything, install things, uninstall them again. I can't use a workbench wrapped in cellophane.
I want my server stable and fixed. I don't care about the OS, I don't want to configure it, update it, and otherwise maintain it. It's just a platform to run the Docker Engine and my containers.
I assume you've visited the Lightwhale website and at least read the headlines:
"No maintenance headaches. Just boot and focus on what matters!"
and the first section, including:
"Lightwhale lowers the entry barrier, removes tedious administration tasks, and opens a friction-free path to productivity"
But maybe you dismissed those as empty buzzwords, or didn't quite believe it, so you want an explanation. That's perfectly okay, allow me to elaborate:
So, if you've already gone through the process of partitioning, installing, and configuring a general-purpose operating system, and then run that "one line" that added the Docker repository, installed docker, docker-compose, and buildx, and presumably also added yourself to the docker user group, then I'd say you've got a head start compared to a clean, bare-metal box. True, you're now off to the races.
Of course, all this work and time invested in your system comes with a tax called maintenance.
Everything you just installed is live. And now your job is to keep everything safe and sound. Not just the files in $HOME and /etc/, but also /usr/bin/docker, /usr/bin/bash, /lib/libc.so, and even /usr/bin/[. But that's just the rootfs. You also have to groom your boot partition. Even the partition table and MBR. I know, your installer and package manager handle that for you. But nevertheless, all this is writable, and it either imposes an attack surface, is at risk of daily butterfingers breaking things, or lives on a filesystem that may introduce silent bitrot or on media that can fail.

But that's not the end of it. If you do a good job, your server will live a long and happy life. And eventually, you get to experience something remarkable: it dies. I'm not talking about the hardware suffering a fatal crash, no; I'm talking about Canonical deciding that your version of Ubuntu is suddenly going End Of Life. You now have choices to make: Should you upgrade and leave your system at the mercy of the numerous upgrade scripts? Should you reinstall to get a clean slate, repeat everything mentioned above, and try to migrate your data, configuration, containers, volumes, etc.? Or maybe just leave the OS there, dead; after all, you're only interested in your Docker containers and they're unaffected, so that seems like a valid path. Except every time you log in, it's like stepping over a dead body in the doorway, and that doesn't feel quite right.
Of course, if you find these administration tasks fun and interesting, that's perfectly okay. And honestly, I can absolutely relate to that, because I've been doing it as a hobby since forever, and I enjoyed myself, until I reached a point where the installer and maintenance didn't interest me as much as just running the software.
Now, look back at my taglines above, and I hope you can see how Lightwhale can benefit you in setting up a home server:
If you want less tedious administration and maintenance, but instead like a friction-free path to just boot and run your software, then Lightwhale is worth a try.
I've long since thrown everything with a user count > 1 out.
Of course nothing is. But there's a reason projects like "Talos" do exist: no terminal, no SSH, no package manager (how do we like package managers like NPM lately btw?), read-only filesystem, definitely no systemd, etc.
And then a minimal number of executables.
This does, definitely, reduce the attack surface.
I'm not speaking about this Show HN's project but there are such things as systems both more secure and requiring less maintenance than others.
Throwing in the towel and saying: "nothing can ever be 100% secure so we'll always need to patch so we may as well YOLO by accepting npm packages modified 3 minutes ago" is not the way to go forward either.
Talos on IncusOS is likely a very interesting stack that I intend to play with hopefully in the near future.
https://linuxcontainers.org/incus-os/docs/main/
First time I heard someone call it blue-green OS updates instead of A/B OS updates.
Same concept, I guess. I'm a platform engineer / SRE, and blue/green is a more common way of describing that way of deploying applications so I didn't even consider it could have a different name on the OS layer.
Obviously the software you run needs upgrades, but (again, a layer down) it's based on Docker, and probably someone else is maintaining it. So you pull that new container, restart, and the OS is just making sure your data lands in the same place with the new container.
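That update cycle is short enough to sketch. A hedged example using standard Docker Compose commands (the function name is illustrative, and this assumes your stack keeps its state in named volumes or bind mounts):

```shell
# Sketch of the pull-and-restart cycle: someone else maintains the image,
# you just fetch it and recreate the container on top of the same data.
update_stack() {
    docker compose pull        # fetch newer images for all services
    docker compose up -d       # recreate only containers whose image changed
    docker image prune -f      # drop the superseded image layers
}
```

Because volumes live outside the container filesystem, the recreated container finds its data exactly where the old one left it.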
If you're happy with all your software running from Docker, this seems like a step up from Debian or Red Hat, and it has a lot less bureaucracy than something like CoreOS.
Whether it's _usable_ I'm not sure (especially around storage management) but it's a really clear pitch.
Lightwhale is perhaps more of an "expert system" compared to consumer products that come preconfigured with a UI. On the other hand, if you want to try a server where you are in full control, Lightwhale is the easiest way to get a pure Docker server up and running.
The way to interact with the OS is significantly different from almost all other Linux distros. There is no shell, no DE. This feels like a lot more than "a custom paint job".
Is Ubuntu an OS? Mint?
Neither has built the package management system, or the kernel, the DE(s), the utilities (maybe some but certainly not all).
What about CentOS? Or Bazzite? Or even Android?
Is macOS an OS, or "a custom BSD distro"?
And if none of those are OSs, does a Linux-based OS even exist? If not, what's the point of the distinction?
> Linus Torvalds and DHH are the only two arrogant people in tech
Are you feeling ok bud?
This is what Wikipedia says about NixOS:
> NixOS is a Linux distribution built around the Nix package manager.
This is about CentOS:
> CentOS (from Community Enterprise Operating System; also known as CentOS Linux) is a discontinued Linux distribution that provided a free and open-source community-supported computing platform, functionally compatible with its upstream source, Red Hat Enterprise Linux (RHEL).
[0]: https://distrowatch.com/
[0]: https://www.gnu.org/gnu/linux-and-gnu.html
> Can you please add wget, nano, $my_fav_app_omg_i_love_it to the root filesystem?
> No, not likely.
I am guessing the way to use software not already in the image is to use `docker run`.
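If so, the pattern is easy to sketch. A hedged example (the `cwget` name and the alpine image are illustrative, not part of Lightwhale):

```shell
# Instead of adding wget to the immutable rootfs, wrap it in a throwaway
# container. Nothing is installed on the host; the image is fetched once.
cwget() {
    docker run --rm alpine:latest wget "$@"
}
# usage: cwget -qO- https://example.com
```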
Of course you can choose a general-purpose Linux dist. But you'll still have to go through the extra work of partitioning the disk, installing the system, selecting packages, possibly following some other guide to install Docker Engine, Docker Compose, Docker Buildx, and doing that thing with the docker group. At that point you'll pretty much have what Lightwhale provides out of the box. Except you'll be left with a system that isn't immutable, and where system and data are entangled on one filesystem. And if that's your thing, you should definitely do that. But...
If you just want to run your containers, then Lightwhale is best in class.
I'm getting ready to launch an online game and I'm dealing with "how do I just run my game server on dozens of boxes without dealing with linux stuff".
I don't really have an answer yet (leaning into "just get one really powerful box" lol), but my investigation into the problem so far has been pretty interesting.
You can conceptualize "my program + the OS" as a single program. It's not a pretty picture. Lots of global mutable state. (Also it randomly modifies itself??)
The whole point of Docker appears to be "I just want to run my program", in the least painful way possible. Immutable Linux extends the "lean in the direction of sanity" idea. (The programming and OS worlds seem to be learning the same lessons, from different angles.)
And then there's "it turns out the OS solves problems I don't have, while creating many new problems", which leads to Unikernels. Fun stuff ;)
In a perfect world, I wouldn't need the OS at all. Docker gives me two Linuxes to worry about! The number of operating systems I want to worry about is zero!
Which brings us to Unikernels! Just ditch the OS! Technically the right answer, except... now I'm a kernel developer? Maybe that's the least bad option, long term.
A good first question to ask yourself is why you need to run it on dozens of boxes. You probably don't.
The point of Docker is not "I just want to run my program", the point is to bundle an application with its dependencies. It's one way to distribute applications, and far from the only one (despite what talking to some people might make you think).
As for the last part of your post, none of it is correct. Docker is not a "second linux to worry about" and considering unikernels in your use case is insane.
Terry Davis once said that "an idiot admires complexity, a genius admires simplicity". You say you're "getting ready to launch an online game", then launch it. The best way to do that is the simplest way, which in my opinion is running it as a systemd service on _one_ Linux VM. When that actually creates problems for you, solve those problems, and only those problems.
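For reference, that single-VM setup can be as small as one unit file. A hedged sketch (the binary path and names are illustrative):

```ini
# /etc/systemd/system/game-server.service
[Unit]
Description=Game server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/opt/game/server
Restart=on-failure
DynamicUser=yes

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now game-server`; systemd then handles restarts and boot ordering, which covers most of what an orchestrator would do for a single box.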
Well, that's several problems, of course :) but one of them is latency and capacity (i.e. servers).
My actual dependency is whatever it takes to get data in and out of a network card.
You get a minimum effort OS to host your game on, and the option to distribute it.
first read looks good, excited to try.
One nice thing to keep in mind, is that all data that is ever written in Lightwhale, exists in /mnt/lightwhale-data/lightwhale-state. Data is not entangled with the rest of the system in the rootfs.
The tricky part perhaps is that you should sync all container data to disk prior to taking the backup (or snapshot) to be sure you have a clean state. And the only safe way I can think of is to stop all containers first. But I think this goes for every Docker Engine.
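A hedged sketch of that quiesce-then-snapshot sequence, assuming the Btrfs-backed data path mentioned above (the function name and snapshot naming are illustrative, not an official Lightwhale tool):

```shell
# Stop containers so no writes are in flight, flush to disk, snapshot,
# then bring everything back up.
quiesce_and_snapshot() {
    running=$(docker ps -q)
    [ -n "$running" ] && docker stop $running
    sync    # flush dirty pages before snapshotting
    btrfs subvolume snapshot -r \
        /mnt/lightwhale-data /mnt/lightwhale-data/snap-"$(date +%F)"
    [ -n "$running" ] && docker start $running
}
```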
The source repository isn't very enlightening?
> The actual repository here hosts the source code for Lightwhale, and is not of any interest for most people.
> https://bitbucket.org/asklandd/lightwhale/src/master/
https://bitbucket.org/asklandd/lightwhale/src/3.0.0/docs/BUI...
I don't consider Lightwhale as an alternative to Proxmox. In fact, how do you even run a Docker container in Proxmox? Without booting Lightwhale in a VM first, I mean? ;)
Or if not Proxmox, without an HTTP GUI: just a boring Debian stable x86-64 system on which to manually install QEMU, virt-tools, and the virsh toolset, running QEMU/KVM things with purely CLI management.
This is an interesting general concept but being limited to only running docker containers is a huge constraint.
Lightwhale does one thing and it does it great: It lets you run Docker containers effortlessly. And that is it. If that's not what you want, you honestly should run something else — no hard feelings =)
And I don't think you can get there via this route. But good luck anyway, I would love to be proven wrong.
Not a huge criticism, life is about choices.
I went with Btrfs for persistence, and automated some things around that. For example, if you give Lightwhale two magic disks, it will automatically create a Btrfs RAID1. During persistence setup it will also create a few default subvolumes to fully support snapshots and rollback of the entire data filesystem. (Remember, the rootfs of the OS is still immutable, and is never part of the data filesystem). Besides snapshots and RAID, Btrfs has checksums which I think is a must.
I've heard lots of nice things about ZFS, and I know it does snapshots and checksums too. But also that it eats huge amounts of memory for breakfast. I may not be up to date on this, but I faintly remember some licensing issues that could potentially cause problems if ZFS were baked into an ISO like Lightwhale. Those are the main reasons why I'm reluctant about ZFS and chose Btrfs.
But you're absolutely right, I have made some radical choices with this dist. But most are deliberate and by design =)
- Flatcar Container Linux: An open-source, immutable OS designed for automatic updates and large-scale container deployments.
- Fedora CoreOS: A secure, automatically updating operating system designed for running containerized applications, succeeding the original CoreOS.
- Talos Linux: A modern, immutable, security-focused OS dedicated entirely to Kubernetes.
- IncusOS: an immutable OS solely designed around safely and reliably running Incus.
I think you need to more clearly explain how this is different. Again, congrats on the launch though.
I migrated from Proxmox and manage all my VMs with it. I heavily use coding assistants to automatically set things up through the IncusOS CLI, translate Docker Compose images to Incus, write bash scripts to automate launching new containers so I can use `--dangerously-skip-permissions` without fear of repercussions, etc.
What I love the most about it is that it's possible to manage IncusOS with declarative files, so you always have visibility into networking setups, resource configuration, etc.
Highly recommend checking IncusOS out if you have similar use cases!
My gut feeling is that enterprise sentiment is leaning heavily towards Proxmox, fuelled by a VMware exodus that will only gain speed, and I don't see Incus really meeting the requirements most people have that previously used VMware, but of course Incus is awesome and you can't always pick technologies by what will be "employable" :-)
I don’t really care for enterprise support. Incus hits a sweet spot no other solution does.
Why do I need immutable if I'm just running docker?
Why do I need a specialized Debian variant when I can install docker on Debian or Ubuntu in a couple minutes?
And maintenance happens directly through the package manager, either through the distro maintained repo, or by adding the official docker repos?
This immutable fad needs to go away. So does flatpak and snap.
Linux already does the things these "solutions" are trying to solve.
Users can't update the base system without root, and applications should be installing dependencies in /usr/lib
It is also the insurance that I will get help whenever I'm stuck.
Sure it could be smaller ... but when it already runs fine on any hardware, even weird stuff like a BananaPi with a low-end RISC-V processor, then I have a difficult time wanting anything else.
I tried to outline the project features in brief at the start of the page. I'm sorry if I didn't communicate it clearly. Feel free to pinpoint where you were thrown off.
You need immutability so you don't have to worry/waste time on maintaining the system, and instead can focus on your containers.
Lightwhale isn't a special Debian variant. It's built from the ground up with Buildroot. It is literally purpose-built for this task.
A package manager doesn't remove the burden of maintenance, it just makes it easier. But it's still maintenance. I'm basically arguing it's unnecessary as long as you have a Docker Engine.
Snap and flatpak, while totally different concepts, I agree.
Linux (or GNU) doesn't solve any of this by itself.
True, the root account is a layer of security. But there are still many other problems, even attack surfaces, on a system.
I did a different explanation here that is easier to relate to, perhaps: https://news.ycombinator.com/item?id=47932066
So first of all, you don't install it; you just boot it. And because Lightwhale doesn't write anything to disk unless you explicitly tell it to (https://lightwhale.asklandd.dk/#persistence-enable), you're safe to boot it any time and check it out. You can use a VM to be sure: https://lightwhale.asklandd.dk/#faq-virtualiuzed
Secondly, I don't feel there's much to see, really. It's just classic Linux text mode, although I do find the GRUB splash and getty login screens quite cool. Instead, I focused on explaining what it does and how it makes your life better.
But since you asked for screenshots, here you go:
https://lightwhale.asklandd.dk/screenshots/boot.png https://lightwhale.asklandd.dk/screenshots/login.png
Enjoy!