
  • To me, the key difference is just how much you can be yourself around that person, without any feeling of self consciousness or shame. Even with very good friends, there are still things about yourself (physical or otherwise) that you don't let them see.

    Also, my wife IS my best friend.

  • Just the stuff that's being accessed directly, so if something's only going to be accessed via your Traefik server from outside, leave it where it is. That way, any compromise of your Traefik server doesn't let an attacker move laterally within the same VLAN (your DMZ) to the real host.

  • Right, then you'll probably want to do something similar to what I'm planning next, which is creating a small "DMZ" VLAN for the public-facing things, being very specific about the ACLs in/out, and default-denying anything else.

    The few things I allow public access to are via Nginx Proxy Manager, using Authelia for SSO/2FA where applicable. I'm intending to move that container into a dedicated VLAN that only allows port 443 in from anywhere (including other VLANs), and only allows specific IP/port combinations out for the services it proxies.

    I don't even intend to allow SSH in/out for that container. I can console in from the Proxmox management console if required.
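
    For illustration, the intent in firewall-rule terms looks something like the below. It's an nftables-style sketch purely to make the policy concrete (in my case the actual rules live in OPNsense's GUI), and the VLAN interface name and backend IP/port are made up:

    ```
    table inet dmz_policy {
        chain forward {
            type filter hook forward priority filter; policy drop;

            # return traffic for flows that are already established
            ct state established,related accept

            # anything (including other VLANs) may reach the proxy, on 443 only
            oifname "vlan60" tcp dport 443 accept

            # the proxy may only reach specific backend IP/port combinations
            iifname "vlan60" ip daddr 192.168.30.11 tcp dport 8096 accept
        }
    }
    ```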

  • What would you do, for a basic homelab setup (Nextcloud, Jellyfin, Vaultwarden and such)?

    I guess my first question is are you intending to open up any of these to be externally available? Once you understand the surface area of a potential attack, you can be a lot more specific about how you protect yourself.

    I have just about everything blocked off for external access, and use an always-on Wireguard VPN to access them when I'm not home. That makes my surface area a lot smaller, and easier to protect.

  • Yeah, we're pretty lucky here in Australia. I think many countries are lucky like this - people just need to be willing to explore what they already have.

  • I work in data centres, so fuck all windows or outside time during the week. Even on days I get to work from home, I'm chained to the computer.

    Evenings are gaming or watching TV with my wife. Weekends are projects on my house, yard or car, if the weather lets me. Otherwise, I'm tinkering with tools, electronics, and/or home automation.

    Longer bouts of time off are spent 4WDing and camping or caravanning. I go away 4WDing with mates at least annually, but we also have an off-road caravan, so I like to take my wife and daughter to new places.

    About to head off for four nights off-roading in one of my favourite parts of the state I live in: the Victorian High Country. Later this year, we're taking a three-week trip to Uluru and back. Proper Aussie Outback trip.

  • Yeah, still got my ancient free Gmail account going. Will probably revert to that.

  • VLANs are absolutely the key here. I run 4 SSIDs, each with its own VLAN. You haven't mentioned what switch hardware you're using, but I'm assuming it's VLAN-capable.

    The (high-level) way I'd approach this would be to first assign a VLAN for each purpose. In your case, sounds like three VLANs for the different WLAN classes (people; IoT; guest) and at least another for infrastructure (maybe two - I have my Proxmox VMs in their own VLAN, separate to physical infra).


    VLANS

    Sounds like 5 VLANs. For the purposes of this, I'll assign them thusly:

    1. vlan10: people, 192.168.10.0/24
    2. vlan20: physical infrastructure, 192.168.20.0/24
    3. vlan30: Proxmox/virtual infra, 192.168.30.0/24
    4. vlan40: IoT, 192.168.40.0/24
    5. vlan50: guest, 192.168.50.0/24

    That'll give you 254 usable IP addresses in each VLAN. I'm assuming that'll be enough. ;)


    SWITCH

    On your switch, define a couple of trunk ports tagging appropriate VLANs for their purpose:

    1. One for your Nighthawk, tagging VLANs 10, 20, 40 and 50 (don't need 30 - Proxmox/VMs don't use wireless)
    2. One for your Proxmox LAN interface, tagging all VLANs (you ultimately want to route all traffic through OPNsense)

    If you had additional wired access points for your wireless network, you'd create additional trunk ports for those per item 1. If you have additional Proxmox servers in your cluster, ditto for item 2 above.
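
    By way of example, on a Cisco-style CLI those two trunk ports would look roughly like the below. Your switch's syntax/UI will differ, and the port numbers here are made up:

    ```
    ! Trunk to the Nighthawk (people, physical infra, IoT, guest)
    interface GigabitEthernet0/1
     switchport mode trunk
     switchport trunk allowed vlan 10,20,40,50

    ! Trunk to the Proxmox LAN interface (all VLANs)
    interface GigabitEthernet0/2
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30,40,50
    ```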


    WIRELESS

    I'm not that familiar with OpenWRT, but I assume you can create some sort of rules that land clients in VLANs of your choice, and tag the traffic that way. That's how it works on my Aruba APs.

    For example, anything connecting to the IoT SSID would be tagged with vlan40. Guest with vlan50, and so on.
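
    As I said, I don't run OpenWRT, so treat this as a rough, untested sketch of what I'd expect the SSID-to-VLAN mapping to look like (SSID name, key, radio and port names are placeholders):

    ```
    # /etc/config/wireless - bind the IoT SSID to the "iot" network
    config wifi-iface 'iot_ap'
        option device 'radio0'
        option mode 'ap'
        option ssid 'MyIoT'
        option encryption 'psk2'
        option key 'changeme'
        option network 'iot'

    # /etc/config/network - "iot" rides VLAN 40 on the LAN bridge
    config bridge-vlan
        option device 'br-lan'
        option vlan '40'
        list ports 'lan1:t'

    config interface 'iot'
        option device 'br-lan.40'
        option proto 'none'
    ```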


    PROXMOX

    1. Create a Linux Bridge interface for the LAN interface, bridging the physical interface connected to SWITCH item 2, above
    2. Create Linux VLAN interfaces on the bridge interface, for each VLAN (per my screenshot example)

    You haven't mentioned internet/WAN but, if you're going to use OPNsense as your primary firewall/router in/out of your home network, you'd also create a Linux Bridge interface for the physical interface connecting to your internet service.
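
    For reference, the LAN side of that ends up looking something like the below in /etc/network/interfaces (the physical NIC name and the .20 management address are from my setup; substitute your own):

    ```
    # VLAN-aware bridge on the physical LAN NIC (trunk from the switch)
    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # one Linux VLAN interface per VLAN, e.g. physical infra (vlan20)
    auto vmbr0.20
    iface vmbr0.20 inet static
        address 192.168.20.2/24
    ```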


    OPNSENSE

    This is the headfuck stage (at least, it was for me at first). Simply put, you need to attach the Proxmox interfaces to your OPNsense VM, and create VLAN interfaces inside OPNsense, for each VLAN.

    I'm not going to attempt to explain it in reduced, comment form - no way I could do it justice. This guide helped me immensely in getting mine working.


    If you have any issues after attempting this, just sing out mate, and I'll try and help out. Only ask is that we try and deal with it in comment form here where practical, for when Googlers in the future land here in the Fediverse.

  • It sounds like what you're looking to achieve is what's known as zero trust architecture (ZTA). The primary concept is that you never implicitly trust a particular piece of traffic, and always verify it instead.

    The most common way I've seen this achieved is exactly what you're talking about - more micro-segmentation of your network.

    The design principles are usually centred around what the crown jewels are in your network. For most companies applying ZTA, that's usually their data, especially customer data.

    Ideally you create a segment that holds that data, but no processing/compute/applications. You can also create additional segments for more specific use cases if you like, but I've rarely seen this get beyond three primary segments: server; database; data storage (file servers, etc).

    In your case, you can either create three separate VLANs on your Proxmox cluster, with your OPNsense firewall having an interface defined in each, or use the Proxmox firewall. I'd go the former - OPNsense is a lot more capable than the Proxmox firewall, especially if you turn on intrusion detection.
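
    To illustrate the former (VM ID, bridge and VLAN tags are just examples): one way to wire it up from the Proxmox side is a tagged virtio NIC per segment on the OPNsense VM. I do it with VLAN interfaces inside OPNsense itself instead, per the reply I linked, but either approach gives OPNsense a leg in each segment:

    ```
    # hypothetical OPNsense VM ID 100; one NIC per segment VLAN
    qm set 100 --net1 virtio,bridge=vmbr0,tag=60   # servers/apps
    qm set 100 --net2 virtio,bridge=vmbr0,tag=61   # database
    qm set 100 --net3 virtio,bridge=vmbr0,tag=62   # data storage
    ```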

    I'm not using any further segmentation beyond my VMs sitting in their own VLAN, separate from my physical infra, but here's a screenshot of my networking setup on Proxmox. I wrote this reply to another post here on Selfhosted, talking about how my interfaces are set up. In my case, I have OPNsense running as a VM on the same Proxmox cluster. As I said in there, it's a bit of a headfuck getting it done, but very easy to manage once set up.

    BTW, ZTA isn't overkill if it's what YOU want to do.

    You're teaching yourself some very valuable skills, and you clearly have a natural talent for thinking both vertically and horizontally about your security. This shit is gold when I interview young techs. One of my favourite interview moments is when I ask about their home setups, and then get to see their passion ignite when they talk about it.

  • "At the time of my move I went through my list of apps I bought and tallied up the ones that I still used. It was less than $50 of repurchases."

    Yeah, I know this is what I should do too. As someone else said in this comment thread, gotta tear that bandaid off at some point. Just shits me that I should have to. But the freedom after doing it... <*chef's kiss*>

  • Nah - don't make excuses for them. Here in Australia, we call entitled people like this cunts. With a hard 'c'. Not the nice one, with a soft 'c'.

  • Yeah, I had the same experience with the devs of Pushbullet, after constructively suggesting a few ways they might be able to work with proxy servers, and all I got back was "Proxies are bad, mmmmk?".

    Fucken Peter Pan-level mentality.

  • Yeah, that's the other thing that shits me. Paying for my wife and me on Workspaces, and we don't have family sharing rights. We're literally paying to be treated like second-class citizens!

  • Yep, all true. I was oversimplifying in my explanation, but you're right. There's a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

  • Yeah, I came across this project a few months ago, and got distracted before wrapping my head around the architecture. Another weekend project to try out!

  • To answer each question:

    • You can run rootless containers but, importantly, you don't need to run Docker as root. Should the unthinkable happen, and someone "breaks out" of docker jail, they'll only be running in the context of the user running the docker daemon on the physical host.
    • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
    • It's the opposite - you don't really need to care about docker networks unless you have an explicit need to contain a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required (quick sketch after this list).
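
    To make those last two points concrete, here's a rough sketch (the repo URL, image name and paths are made up):

    ```
    # build the image yourself from the project's public repo...
    git clone https://github.com/example/someservice.git
    cd someservice
    docker build -t local/someservice:1.0 .

    # ...then run it with the config bind-mounted read-only
    docker run -d --name someservice \
      -v /srv/someservice/config:/config:ro \
      local/someservice:1.0
    ```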

    I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I've created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

    It's not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

    Why? I like to play.

    Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

    Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

    Let's say there's a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

    I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
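
    In rough command form, that spin-up is something like the below (the CT ID, template, VLAN tag and playbook names are mine/made up):

    ```
    # new CT from my Debian+Docker template
    pct create 135 local:vztmpl/debian-12-docker-base.tar.zst \
      --hostname immich-rival \
      --net0 name=eth0,bridge=vmbr0,tag=30,ip=dhcp \
      --cores 2 --memory 2048 --rootfs local-lvm:16
    pct start 135

    # provision Docker and the other bits; it then shows up in Portainer
    ansible-playbook -i inventory.yml provision-docker.yml --limit immich-rival
    ```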

    I have a play with the competitor for a bit. If I don't like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don't like about the new kid on the block.

  • "I’d avoid Google, they don’t have a stable offering"

    What do you mean by not stable?

    I've been (stuck with) Google Workspace for many, many years - I was grandfathered out from the old G-Suite plans. The biggest issue for me is that all my Play store purchases for my Android are tied to my Workspace's identity, and there's no way to unhook that if I move.

    I want to move. I have serious trust issues with Google. But I can't stop paying for Workspaces, as it means I'd lose all my Android purchases. It's Hotel fucking California.

    But I've always found the email to be stable, reliable, and the spam filtering is top notch (after they acquired and rolled Postini into the service).

  • Using CloudFlare and using the cloudflared tunnel service aren't necessarily the same thing.

    For instance, I use cloudflared to proxy my Pihole servers' requests to CF's DNS-over-HTTPS servers, for maximum DNS privacy. Yes, I'm trusting CF's DNS servers, but I need to trust an upstream DNS somewhere, and it's not going to be Google's or my ISP's.
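
    For reference, that's just cloudflared's DNS proxy mode, with Pihole pointed at it as a custom upstream (the port is the usual convention; pick whatever you like):

    ```
    # local DNS-over-HTTPS forwarder for Pihole
    cloudflared proxy-dns --port 5053 \
      --upstream https://1.1.1.1/dns-query \
      --upstream https://1.0.0.1/dns-query
    # then set Pihole's custom upstream DNS to 127.0.0.1#5053
    ```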

    I use CloudFlare to proxy access to my private li'l Lemmy instance, as I don't want to expose the IP address I host it on. That's more about privacy than security.

    For the few self-hosted services I expose on the internet (Home Assistant being a good example), I don't even bother with CF at all. I use Nginx Proxy Manager and Authelia, providing SSL I control and enforcing a 2FA policy I administer.

  • Texted my wife to tell her I was heading to a mate's place for "a dip in the pool and some pizza", then followed up with a texted stream of consciousness, one line at a time, about how I was planning to eat the pizza - not dip in it - then pondering what dip on pizza would be like, followed by weighing up the pros and cons of about 4 or 5 different dips on pizza, and the different pizzas they might work on.

    It took about 7 or 8 messages before I got her eyeroll response. Worth it.