
  • Yeah, I'm not saying everybody has to go and delete their infra; I just think that all new production environments should be k8s by default.

    The production-scale Grafana LGTM stack only runs on Kubernetes, fwiw; Docker and VMs are not supported. I'm a bit surprised that your Kubernetes cluster wouldn't have enough availability to co-locate your general workloads and your observability stack, but it's totally fair to segment those workloads.

    I've heard the argument that "Kubernetes has more moving parts" a lot, and I think that's a misunderstanding. At a base level, all computers have practically infinite moving parts: QEMU has a lot of moving parts, containerd has a lot of moving parts. The reason people use Kubernetes is that all of those moving parts are automated and abstracted away to reduce the daily cognitive load for us operations folks. As an example, I don't run manual updates for minor versions in my homelab. I have a k8s CronJob that runs Renovate, which goes and updates my Deployments in git, and ArgoCD automatically deploys the changes (there's a rough sketch of that CronJob at the end of this comment). Technically that's a lot of moving parts, but it saves me a lot of manual work and thinking, and turns my whole homelab into a sort of automated cloud service that I can go a month without thinking about.

    I'm not sure if container break-out attacks are a reasonable concern for homelabs. See the relatively minor concern in the announcement I made as an Unraid employee last year when Leaky Vessels happened. Keep in mind that containerd uses cgroups under the hood.

    Yeah, AppArmor/SELinux isn't very popular in the k8s space. I think they're easy enough to use, and there's plenty of documentation out there, but OpenShift/OKD is the only distribution that runs it out of the box.
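
    For reference, the Renovate CronJob I mentioned above looks roughly like this. This is just a minimal sketch: the image tag, secret name, and repo are placeholders, and I'm assuming the platform token lives in a Secret that Renovate reads via env vars. Renovate pushes the version bumps to git, and ArgoCD picks them up on its next sync.

        apiVersion: batch/v1
        kind: CronJob
        metadata:
          name: renovate
        spec:
          schedule: "0 2 * * *"          # run once a night
          concurrencyPolicy: Forbid      # never run two updates at once
          jobTemplate:
            spec:
              template:
                spec:
                  restartPolicy: Never
                  containers:
                    - name: renovate
                      image: renovate/renovate:39    # placeholder tag
                      envFrom:
                        - secretRef:
                            name: renovate-env       # placeholder Secret with the git/platform token
                      env:
                        - name: RENOVATE_REPOSITORIES
                          value: "example/homelab"   # placeholder repo to keep updated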

  • Yeah, that's fair. I have set up OpenShift Virtualization for customers using 3rd-party appliances. I've even worked on some projects where a 3rd-party appliance is part of the original spec for the cluster, so installing OpenShift Virtualization to run VMs is part of the day 1 installation of the Kubernetes cluster.

  • Sure!

    I haven't used quadlets yet, but I did set up a few systemd services for containers back in the day before quadlets came out. I also used to use docker compose back in 2017/2018.

    Docker Compose and Kubernetes are very similar from a homelab admin's perspective. Compose syntax is a little less verbose, and it has some shortcuts for storage and networking, but that also means it's less flexible if you're doing more complex things. Docker Compose doesn't start containers on boot by default, I think(?), which is pretty bad for application hosting. It also has no way of automatically deploying from git the way ArgoCD does.

    Kubernetes also has a lot of self-healing automation: health checks that can pull a failing app out of the load balancer and/or restart its container, automatic killing of containers when resources run low, refusing to schedule new containers when resources are low, gradual roll-out of new containers so the old version doesn't get killed until the new version is up and healthy (helpful in case the new config is broken), mounting secrets as files in a container, and automatic restarts of failed containers.

    There are also a lot of ubiquitous automation tools in the Kubernetes space, like cert-manager for setting up certificates (both ACME and local CA), Ingress for setting up reverse proxies, CNPG for setting up Postgres clusters with automated backups, and first-class instrumentation/integration with Prometheus and Loki (both were designed for Kubernetes first).

    The main downsides of Kubernetes in a homelab are that there's about 1-2 GiB of RAM overhead for small clusters, and that most documentation and examples are written for docker-compose, so you have to convert apps into a Deployment (you get used to writing Deployments for new apps; there's a rough example below). I would say installing things like Ingress or CNPG is probably easier than setting up similar reverse-proxy or database automation with Docker Compose, though.
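
    To make the above concrete, converting a typical compose service ends up looking roughly like this. It's only a sketch: the app name, image, port, and hostname are made up, and I'm assuming an nginx Ingress controller and a cert-manager ClusterIssuer called "letsencrypt" are already installed. The probes and resource requests are what drive the self-healing behaviour I mentioned, and the Ingress annotation is what gets cert-manager to issue the TLS cert.

        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: myapp
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: myapp
          template:
            metadata:
              labels:
                app: myapp
            spec:
              containers:
                - name: myapp
                  image: ghcr.io/example/myapp:1.0   # placeholder image
                  ports:
                    - containerPort: 8080
                  resources:
                    requests:              # used for scheduling decisions
                      memory: 128Mi
                      cpu: 100m
                    limits:
                      memory: 256Mi        # container gets killed above this
                  readinessProbe:          # failing -> removed from the Service/load balancer
                    httpGet:
                      path: /healthz
                      port: 8080
                  livenessProbe:           # failing -> container gets restarted
                    httpGet:
                      path: /healthz
                      port: 8080
        ---
        apiVersion: v1
        kind: Service
        metadata:
          name: myapp
        spec:
          selector:
            app: myapp
          ports:
            - port: 80
              targetPort: 8080
        ---
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: myapp
          annotations:
            cert-manager.io/cluster-issuer: letsencrypt   # cert-manager issues the TLS cert
        spec:
          ingressClassName: nginx
          rules:
            - host: myapp.example.com
              http:
                paths:
                  - path: /
                    pathType: Prefix
                    backend:
                      service:
                        name: myapp
                        port:
                          number: 80
          tls:
            - hosts:
                - myapp.example.com
              secretName: myapp-tls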

  • Yeah, definitely true.

    I'm a big fan of single-node kubernetes though, tbh. Kubernetes is an automation platform first and foremost, so it's super helpful to use Kubernetes in a homelab even if you only have one node.

  • Yes, it's fine to still have VMs, but you shouldn't be building out new applications and new environments on VMs or LXC.

    The only VMs I've seen in production at my customers recently are test environments for applications that require kernel access. Those test environments are managed by software running in containers, and often even use something like OpenShift Virtualization so that the entire VM runs inside a container.

  • I'm a DevOps/Platform Engineering consultant, so I've worked with about a dozen different customers on all sorts of different environments.

    I have seen some of my customers use nested VMs, but that was because they were still using VMware or similar for all of their compute. My coworkers say they're working on shutting down their VMware environments now.

    Otherwise, most of my customers are running Kubernetes directly on bare metal or directly on cloud instances. Typically the distributions they're using are OpenShift, AKS, or EKS.

    My homelab is all bare metal. If a node goes down, all the containers get restarted on a different node.

    My homelab is fully GitOps; you can see all of my Kubernetes manifests and NixOS configs here:

    https://codeberg.org/jlh/h5b

  • I'm not saying it's bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone.

    All new build-outs are GitOps-driven and run containerd-based containers now.

    For the legacy VM appliances, Proxmox works well, but there's also OpenShift Virtualization (aka KubeVirt) if you want to take advantage of the Kubernetes ecosystem.

    If you need bare metal, then usually that gets provisioned with something like Packer, nixos-generators, or cloud-init (rough cloud-init sketch below).
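
    For the cloud-init case, the user-data usually isn't much more than something like this (a minimal sketch; the hostname, user, key, and package list are placeholders):

        #cloud-config
        hostname: node01                        # placeholder hostname
        users:
          - name: ops                           # placeholder admin user
            groups: [sudo]
            shell: /bin/bash
            ssh_authorized_keys:
              - ssh-ed25519 AAAA... ops@example # placeholder public key
        packages:
          - containerd                          # example: container runtime preinstalled
        runcmd:
          - [systemctl, enable, --now, containerd]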

  • Definitely try out NixOS; it actually lets you modify more things than any other Linux distro, even Arch.

    Want to compile a custom kernel? Just override the kernel setting with custom compile options, and it'll compile and install the custom kernel for you when you run a system update (rough sketch below).
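
    Roughly like this in configuration.nix. This is just a sketch of the common pattern; the specific option I'm forcing on here is only an example, and pkgs/lib come from the module arguments.

        # fragment of configuration.nix; pkgs and lib come from the module arguments
        boot.kernelPackages = pkgs.linuxPackagesFor (pkgs.linux_latest.override {
          structuredExtraConfig = with lib.kernel; {
            PREEMPT = lib.mkForce yes;   # example: force full preemption
          };
          ignoreConfigErrors = true;
        });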

  • Keep the Synology as a high-uptime NAS.

    I'm not sure which Linux distro is best for gaming and running Docker with an Nvidia card. I just run NixOS for everything, but that requires a lot of coding.

    You should be able to keep running Linux Mint on there. You can install Steam and Delfin (a Jellyfin client); both are available as Flatpaks. You can mount the NAS over NFS at /mnt on that machine using fstab. For the Jellyfin server itself, I would install it with Docker Compose and keep all of Jellyfin's files on the NAS (rough sketch at the end of this comment).

    If you don't want to game, you could sell the GTX 1080 and buy an Intel Arc A310 for €100 (faster for transcoding and better Linux support).
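
    Roughly what I mean for the NFS mount and the Jellyfin container; only a sketch, so the NAS IP, export path, and directory layout are placeholders.

        # /etc/fstab - mount the NAS export on /mnt/nas (IP and share path are placeholders)
        192.168.1.10:/volume1/media  /mnt/nas  nfs  defaults,_netdev  0  0

        # docker-compose.yml for Jellyfin, with config and media living on the NAS mount
        services:
          jellyfin:
            image: jellyfin/jellyfin:latest
            restart: unless-stopped          # come back up after reboots
            ports:
              - "8096:8096"                  # web UI
            volumes:
              - /mnt/nas/jellyfin/config:/config   # Jellyfin config/metadata
              - /mnt/nas/media:/media:ro           # media library, read-only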