Posts 8 · Comments 253 · Joined 2 yr. ago

  • Could you detail how you would do this?

    I would re-read all the docs about podman networking and the different network modes, experiment with the systemd PrivateNetwork option, re-read some basics about network namespaces, etc. ;) I have no precise guide as I've never attempted it, so I would do some research, trial and error, take notes, etc., which is the stage you're at.

    Edit: https://www.cloudnull.io/2019/04/running-services-in-network-name-spaces-with-systemd/, https://gist.github.com/rohan-molloy/35d5ccf03e4e6cbd03c3c45528775ab3, ...

    Could you confirm if one can reach one’s containers on the loopback address in a separate network namespace on podman?

    I think each pod uses its own network namespace [1]. You should check the docs and experiment (ip netns, ip addr, ip link, ip route...) - a rough starting point is sketched at the end of this comment.

    I think it's doable, but it's pretty much uncharted territory - at least the docs for the basic building blocks exist, but I've never come across a real-world example of how to do this. So if you go this way, you will be on your own debugging, documenting and maintaining the system, and fixing it when it breaks. It will be an interesting learning experiment though; I hope you can document and share the outcome. Good luck!
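
    A rough starting point for those experiments (the namespace name and port are placeholders; note that rootless podman manages its namespaces differently, so they may not show up in ip netns list):

      # list network namespaces visible to the host
      sudo ip netns list
      # inspect interfaces, addresses and routes inside one of them
      sudo ip netns exec <namespace> ip addr
      sudo ip netns exec <namespace> ip route
      # try to reach a service bound to the loopback inside that namespace
      sudo ip netns exec <namespace> curl -s http://127.0.0.1:8080/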

  • how do I programmatically utilise sockets for containers to communicate amongst each other?

    Unix sockets are filesystem objects, similar to files. So for two containers to access the same socket, the container exposing the socket must export it to the host filesystem via a bind mount/volume, and the container that needs to read/write this socket must also be able to access it via a bind mount. The UID or groups of the user accessing the socket must be allowed to do so by traditional Unix permissions (a rough sketch is at the end of this comment).

    Again, I personally do not bother with this: I run the reverse proxy directly on the host and configure it to forward traffic over HTTP on the loopback interface to the containers. [1] [2] [3] and many others lead me to think the risk is acceptable in my particular case. If I were forced to do otherwise, I would probably look into plugging the RP into the appropriate podman network namespaces, or running it on a dedicated host (VM/physical - this time using SSL/TLS between the RP and the applications, since traffic leaves the host) and implementing port forwarding/firewalling with netfilter.

    I have a few services exposing a unix socket (mainly php-fpm) instead of an HTTP/localhost socket; in this case I just point the RP at these sockets (e.g. ProxyPass unix:/run/php/php8.2-fpm.sock). If the php-fpm process were running in a container, I'd just export /run/php/php8.2-fpm.sock from the container to /some/place/myapp/php.sock on the host, and target this from the RP instead.

    You need to think about what attacks could actually happen, what kind of damage they would be able to do, and mitigate from there.

    how I can automate the deployment of such proxies along with the pods

    That's a separate question. I use ansible for all deployment/automation needs - when it comes to podman I use the podman_container and podman_generate_systemd modules to automate the deployment of containers as systemd services. Ansible also configures my reverse proxy to forward traffic to the container (simply copy files into /etc/apache2/sites-available/...; a2ensite; systemctl reload apache2). I have not used pods yet, but there is a podman_pod module. A simple bash script should also do the trick as a first step.
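
    Coming back to the socket question, a rough sketch of the bind mount idea (image names and paths are made up; :z sets a shared SELinux label and can be dropped on non-SELinux systems):

      # the container exposing the socket writes it into a directory shared with the host
      mkdir -p /srv/myapp/sockets
      podman run -d --name myapp -v /srv/myapp/sockets:/run/app:z myapp-image
      # the consumer (another container, or the reverse proxy on the host) uses the same directory
      podman run -d --name rproxy -v /srv/myapp/sockets:/run/app:z rproxy-image
      # check ownership/permissions on the socket from the host side
      ls -l /srv/myapp/sockets/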

  • It's not possible to mount NFS shares without root (a rootful container would work, but I don't recommend it). Docker allows it because it implicitly runs as root. The cleanest solution is to mount the share from the host's fstab and use a bind mount.
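
    Something like this (server, export and paths are made up):

      # /etc/fstab on the host:
      #   nas.example.org:/export/media  /mnt/media  nfs  defaults,_netdev  0  0
      sudo mkdir -p /mnt/media && sudo mount /mnt/media
      # then bind mount the already-mounted share into the rootless container
      podman run -d --name myapp -v /mnt/media:/media:ro myapp-image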

  • Is the fact that I mentioned ChatGPT setting a wrong impression?

    Not at all, but the fact that it suggested jumping straight to k8s for such a trivial problem is... interesting.

    how using Unix sockets would improve my security posture here

    Unix sockets enforce another layer of protection by requiring the user/application reading from or writing to them to have a valid UID or be part of the correct group (the traditional Linux/Unix permission system). Whereas with plain localhost HTTP networking, a rogue application could somehow listen on the loopback interface and/or exploit a race condition to bind the port and pretend to be the "real" application. Network namespaces (which container management tools use to create isolated virtual networks) mostly solve this problem. Again, basic unencrypted localhost networking is fine for the vast majority of use cases/threat models.
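
    To illustrate that permission layer (user/group names and paths are made up):

      # only the owner and members of the www-data group may use this socket
      sudo chown myapp:www-data /run/myapp/app.sock
      sudo chmod 660 /run/myapp/app.sock
      ls -l /run/myapp/app.sock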

  • I’m missing the point about a reverse-proxy being an SSL termination endpoint

    Yes, that's usually one of the jobs of the reverse proxy. Communication between the RP and an application container running on the same host is typically unencrypted. If you're really paranoid about a rogue process intercepting HTTP connections between the RP and the application container, set up separate container networks for each application, and/or use unix sockets (a minimal sketch is at the end of this comment).

    ChatGPT suggested I use Kubernetes

    wtf...
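
    For the separate container networks option above, a minimal sketch (network/image names are made up):

      # give each application its own podman network
      podman network create app1-net
      podman run -d --name app1 --network app1-net app1-image
      # attach the reverse proxy container only to the networks it needs to reach
      podman run -d --name rproxy --network app1-net -p 8443:8443 rproxy-image
      podman network connect app2-net rproxy    # assuming app2-net was created the same way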

  • You use podman unshare to chown the directories to the appropriate UID/GID in the container's user namespace.
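
    For example, if the process inside the container runs as UID/GID 1000 (the path is made up):

      podman unshare chown -R 1000:1000 ~/containers/myapp/data
      # verify from inside the user namespace
      podman unshare ls -ln ~/containers/myapp/data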

  • frequently updated

    Not something I'd want on my server :) Partly joking: their lifecycle makes sense if you stay on a major.minor release. Though I find 2 years of security support a bit short - Debian LTS is usually around 5 years. That's not an excuse to wait until the last moment to upgrade, but I find it more comfortable than just 2 years.

    One thing to watch for is that Alpine uses musl as its libc, and many programs expect glibc, so you might run into obscure bugs. I find it good as a base for OCI images (again, there are edge cases), but I wouldn't use it for a general-purpose server.

  • You technically can bind ports <1024 to unprivileged containers. echo 'net.ipv4.ip_unprivileged_port_start=0' | sudo tee /etc/sysctl.d/50-unprivileged-ports.conf; sudo sysctl --system. Though this will allow any user to bind ports below 1024, so it's not very clean.

    Another workaround is to redirect port 80 to 8080 (or any other port) through iptables and have your proxy listen on that port. sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080. Same thing for 443.

    As far as I know granting the CAP_NET_BIND_SERVICE capability to /usr/bin/podman does not work.

    Also, the podman-compose implementation is still incomplete, and I prefer using systemd units to start and manage containers. Check man podman-generate-systemd.
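
    A quick sketch of that workflow (container name is made up):

      # generate a systemd user unit for an existing container and let systemd manage it
      podman generate systemd --new --files --name myapp
      mkdir -p ~/.config/systemd/user && mv container-myapp.service ~/.config/systemd/user/
      systemctl --user daemon-reload
      systemctl --user enable --now container-myapp.service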

  • the computer has room for one drive only

    The case might, but are you sure there isn't a second SATA port on the motherboard? In that case, and assuming you're using LVM, it would be easy to plug both drives in simultaneously while the case is open: create the appropriate partitions and pvcreate/vgextend on the new drive, pvmove everything to the new drive, then vgreduce/pvremove to retire the old drive. Done.
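
    Rough outline of the LVM part (device and VG names are made up, double-check them against your actual layout):

      sudo pvcreate /dev/sdb1              # new drive, after partitioning it
      sudo vgextend myvg /dev/sdb1
      sudo pvmove /dev/sda2                # migrate all extents off the old PV
      sudo vgreduce myvg /dev/sda2
      sudo pvremove /dev/sda2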

  • Without more information, I'd say you're looking for podman run --volume /mnt/sdb/:/path/inside/your/container. Check the manpage for podman run

  • Graylog and elasticsearch might fit on that, depending on how much is already used and whether you set the heap sizes to their bare minimum... but it will perform badly, and it's overkill anyway if you just need this simple stat.

    I would look into writing a custom log parser for goaccess (https://goaccess.io/man#custom-log) and let it parse your bridge logs. This is how the geolocation section looks in the HTML report (each continent can be expanded to reveal the stats by country).

    I update the report every hour via cron, as I don't need real-time stats (but goaccess can do that).
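
    Something along these lines, run from an hourly cron job (paths are made up, and COMBINED is just a placeholder for the custom --log-format matching your bridge logs):

      goaccess /var/log/mybridge/access.log --log-format=COMBINED -o /var/www/html/report.html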

  • you should really upgrade soon

    Debian stable has podman 4.3, and 4.4 is not in stable-backports.

  • Quadlet

    Requires podman 4.4 though
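
    For reference, a minimal rootless Quadlet setup looks something like this (name/image are made up):

      mkdir -p ~/.config/containers/systemd
      {
        echo '[Container]'
        echo 'Image=docker.io/library/myapp:latest'
        echo 'PublishPort=127.0.0.1:8080:8080'
      } > ~/.config/containers/systemd/myapp.container
      systemctl --user daemon-reload    # the quadlet generator turns this into myapp.service
      systemctl --user start myapp.service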