Posts 0 · Comments 55 · Joined 1 yr. ago

  • It's definitely encrypted; they can just tell by the signature that it's WireGuard (or whatever) and block it.

    They could do the same with SSH if they felt like it.

  • Usually a reverse proxy runs behind the firewall/router. The idea is that you point 80/443 at the proxy with port forwarding once traffic hits your router.

    So if someone goes to service.domain.com

    You would have dynamic DNS keeping domain.com pointed at your router's public IP.

    You would tell domain.com that service.domain.com exists as a CNAME or an A record. You could also say *.domain.com is a CNAME; that would point any hostname to your router.

    From there, in the proxy you would say service.domain.com points to your service's IP and port. Usually that would be on the LAN, but in your case it would be through the tunnel.

    It is possible, and probably more resource efficient, to just put the proxy on the VPS and point your public domain traffic directly at the VPS IP.

    So on the domain you could say service.domain.com points to the VPS IP as an A record, and service2.domain.com points to the VPS IP as another A record.

    You would allow 80/443 on the VPS and create entries for the services.

    Those would look like service.domain.com pointing to localhost:port (a bare-bones sketch of that hostname-to-backend mapping is at the end of this comment).

    In your particular case I would just run the proxy on the public VPS the services are already on.

    Don't forget you can enable HTTPS certificates once you have them running. You can secure the management interface on its own service3.domain.com through the proxy if you need to.

    And OP, consider some blocklists for your VPS firewall, like Spamhaus. It wouldn't hurt to set up fail2ban either.
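
    If it helps to see the routing idea in code, here's a bare-bones sketch of the hostname-to-backend mapping the proxy is doing: it reads the Host header and forwards the request to the matching backend. The hostnames and ports are made-up examples; in practice nginx or Nginx Proxy Manager does this, not a hand-rolled script.

    ```python
    # Minimal sketch of reverse-proxy routing (GET only, example hostnames/ports).
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    # service.domain.com -> localhost:port style mapping (assumed example values)
    BACKENDS = {
        "service.domain.com": "http://127.0.0.1:8096",
        "service2.domain.com": "http://127.0.0.1:8384",
    }

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "").split(":")[0]
            backend = BACKENDS.get(host)
            if backend is None:
                self.send_error(502, "unknown hostname")
                return
            # Forward the request to the backend and relay the response body back.
            with urlopen(Request(backend + self.path)) as upstream:
                status = upstream.status
                body = upstream.read()
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # A real proxy listens on 80/443; 8080 here just avoids needing root.
        HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
    ```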

  • You can do that, or you can use a reverse proxy to expose your services without opening ports for every service. With a reverse proxy you point 80 and 443 at the proxy once traffic hits your router/firewall. In the reverse proxy you configure hostnames that point to the local service IPs/ports. Reverse proxies like Nginx Proxy Manager then allow you to set up HTTPS certificates for every service you expose. They also allow you to disable access to any of them through a single interface.

    I do this and have set up some blocklists on the OPNsense firewall. Specifically, you could set up the Spamhaus blocklists to drop any traffic that originates from those IPs (a small sketch of checking an IP against the DROP list is at the end of this comment). You can also use the Emerging Threats blocklist; it has Spamhaus and a few more integrated from DShield etc. These can be made into simple firewall rules.

    If you want to block entire countries' IPs you can set up the GeoIP blocklist in OPNsense. This requires a MaxMind account but allows you to pick and choose countries.

    You can also set up the Suricata IPS in OPNsense to block detected traffic using daily-updated lists. It's a bit more resource intensive than regular firewall rules but also far more advanced at detecting threats.

    I use both the firewall lists and IPS scanning on the WAN and LAN in promiscuous mode. This heavily defends your network in ways most modern networks don't even take advantage of.

    If you want even more security you can set up Unbound with DNS over TLS. You could even set up OpenVPN and route all your internal traffic through it to a VPN provider. Personally I prefer having individual systems connect to a VPN service.

    Anyway, all this to say: no, you don't need a VPN static IP. You may instead prefer a domain name you can point at your systems. If you're worried about privacy here, look for providers that accept crypto and don't care about identity. The same goes for VPN providers.
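
    As promised above, here's roughly what a drop-list rule boils down to: fetch the Spamhaus DROP list and check whether a source IP falls inside any listed network. OPNsense does this with aliases and firewall rules; this is only the concept, and the URL is the public DROP list as I know it.

    ```python
    # Sketch: check an IP against the Spamhaus DROP list.
    import ipaddress
    import urllib.request

    DROP_URL = "https://www.spamhaus.org/drop/drop.txt"

    def load_drop_networks(url=DROP_URL):
        """Parse the list into network objects; entries look like '1.2.3.0/24 ; SBL123'."""
        networks = []
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode().splitlines():
                entry = line.split(";")[0].strip()   # drop the comment part
                if entry:
                    networks.append(ipaddress.ip_network(entry))
        return networks

    def is_blocked(ip, networks):
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in networks)

    if __name__ == "__main__":
        nets = load_drop_networks()
        print(is_blocked("203.0.113.5", nets))   # TEST-NET address, just an example
    ```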

  • This is a journey that will likely fill you with knowledge. During that process what you consider "easy" will change.

    So the answer right now for you is use what is interesting to you.

    Yes, there are plenty of ways to do the same thing. IMO though, right now just jump in and install something, then play with it.

    Just remember modern CPUs can host many services from a single box. How they do that can vary.

  • Probably these directories...

    /tmp /var/tmp /var/log

    The first two are easy to migrate to tmpfs if you are trying to reduce disk writes. Logs can be a little tricky because of the permissions. It is worth getting it right if you are concerned about all those little writes on an SSD, especially if you have plenty of memory. (A quick way to check what's already on tmpfs is sketched below.)

    This is filesystem agnostic btw so the procedure can apply to other filesystems on Linux operating systems.
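
    For reference, here's a small check (reading /proc/mounts) of which of those directories are already backed by tmpfs. It's only a verification helper; the actual migration is an /etc/fstab change.

    ```python
    # Report the filesystem type behind /tmp, /var/tmp and /var/log.
    CANDIDATES = ["/tmp", "/var/tmp", "/var/log"]

    def mount_types():
        mounts = {}
        with open("/proc/mounts") as f:
            for line in f:
                device, mountpoint, fstype = line.split()[:3]
                mounts[mountpoint] = fstype
        return mounts

    if __name__ == "__main__":
        mounts = mount_types()
        for path in CANDIDATES:
            print(path, "->", mounts.get(path, "(not a separate mount)"))
    ```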

  • I have to admit I was doing the same but with the Greek versions. Though I liked to throw in hydras and the like.

  • That's somewhat true. However, the hardware support in BSD, especially around video, has been blah. If you are interested in playing with ZFS on Linux I would recommend Proxmox. That particular OS is one of the few that lets you install onto a ZFS rpool from the installer. Proxmox is basically Debian with a kernel modified a bit more for virtualization; one of the mods was including ZFS support from the installer.

    Depending on what you get, if you go the Proxmox route you could still install BSD in a VM and play with the filesystem. You may even find other methods to get Jellyfin the way you like it with an LXC, a VM, or Docker.

    I started out on various operating systems and settled on Debian for a long time. The only reason I use Proxmox is that the web interface is nice for management, plus the native ZFS support. I change things from time to time and snapshots have saved me from myself.

  • Hardware support can be a bit of an issue with BSD in my experience. But if you're asking about hardware, it doesn't take as much as you may think for Jellyfin.

    It can transcode just fine with Intel Quick Sync, so basically any modern Intel CPU, or slightly older. (A quick check for the GPU render node Jellyfin needs is sketched at the end of this comment.)

    What you need to consider more is storage space for your system and if your system will do more than just Jellyfin.

    I would recommend a barebones server from Supermicro, something you could throw a few SSDs in.

    If you are not too stuck on BSD, maybe have a look at Debian or Proxmox. Either way I would recommend docker-ce, mostly because this particular Jellyfin image is very well maintained:

    https://fleet.linuxserver.io/image?name=linuxserver/jellyfin
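
    And since Quick Sync came up, here's a tiny check for the VAAPI render node Jellyfin uses for hardware transcoding. /dev/dri is the usual Linux location; adjust if yours differs.

    ```python
    # Look for a GPU render node (what gets passed into the Jellyfin container for QSV).
    from pathlib import Path

    dri = Path("/dev/dri")
    render_nodes = sorted(dri.glob("renderD*")) if dri.exists() else []
    if render_nodes:
        print("GPU render node(s) found:", ", ".join(str(p) for p in render_nodes))
    else:
        print("No /dev/dri render node; Quick Sync transcoding won't be available.")
    ```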

  • So you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was referring more to whether you kept the defaults during installation (LVM/ext4) or switched to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to "pass through" as SCSI devices. Just be aware this is not true passthrough. It is slightly virtualized, although it does hand the entire storage of the device to the VM. The only true passthrough, without that slight virtualization, would be PCI passthrough utilizing IOMMU.

    I have some experience with this, specifically because of a client doing something similar with a TrueNAS VM. They discovered they couldn't import their pool into another system because Proxmox had slightly virtualized the disks when they were added to the VM in this manner. In other words, ZFS wasn't directly managing the disks; it was managing virtual disks.

    Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are these ext4, xfs, btrfs? (A quick way to check is sketched at the end of this comment.) mergerfs is just a union filesystem that unifies storage across multiple mountpoints into a single virtual filesystem, which means you have another couple of layers of complexity in your setup.

    If you are worried about disk I/O you may consider letting the hypervisor manage these disks and their storage a bit more directly, removing some of the filesystem layers.

    I would recommend just making a single ZFS pool from these disks within Proxmox to do this. Obviously that is a pretty big transition on a production system. Another option would be creating a btrfs RAID from these disks within Proxmox and adding that mountpoint as storage to the hypervisor.

    Personally I use ZFS, but btrfs works well enough. Regardless, this would allow you to just hand storage to VMs from the GUI, and the hypervisor would handle disk I/O much more efficiently.

    As for the error, it's typically repaired by unmount/mount operations. As I mentioned before, the cause can vary but is usually a loss of network connectivity or an inability to lock something that's in use.

    My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.


    (Reposted to OP, as OP claims his comments are being purged.)
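
    If it's useful, here's a rough way to answer the "what's under mergerfs" question from /proc/mounts. By default mergerfs reports its branch list as the mount source (unless an fsname= option was set), so each branch can be mapped back to its own filesystem. Nothing here is hard-coded to your setup.

    ```python
    # Sketch: list each mergerfs pool and the filesystem type behind each branch.
    def mounts():
        entries = []
        with open("/proc/mounts") as f:
            for line in f:
                source, mountpoint, fstype = line.split()[:3]
                entries.append((source, mountpoint, fstype))
        return entries

    def mergerfs_branches(entries):
        for source, mountpoint, fstype in entries:
            if fstype == "fuse.mergerfs":
                yield mountpoint, source.split(":")

    if __name__ == "__main__":
        entries = mounts()
        for pool, branches in mergerfs_branches(entries):
            print(f"{pool} is a mergerfs union of:")
            for branch in branches:
                # longest matching mountpoint wins (e.g. /mnt/disk1 over /)
                fs = max((m for m in entries if branch.startswith(m[1])),
                         key=lambda m: len(m[1]))[2]
                print(f"  {branch} ({fs})")
    ```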

  • Well OP, look at it this way...

    A single 50 MB nginx Docker image can be used multiple times for multiple Docker containers.

  • Hmm. If you are going to have Proxmox managing ZFS anyway, then why not just create datasets and share them directly from the hypervisor?

    You can do that in the terminal, but if you prefer a GUI you can install Cockpit on the hypervisor with the ZFS plugin. It creates a separate web GUI on another port, making it easy to create, manage, and share datasets as you desire. (A rough CLI equivalent is sketched below.)

    It will save resources and simplify ZFS management if you are interested in such a method.
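
    For reference, the terminal version of "create a dataset and share it from the hypervisor" is roughly the following. It just wraps the zfs CLI (kept in Python only so all the sketches here use one language); the pool and dataset names are made up, and it needs to run as root on the Proxmox host.

    ```python
    # Sketch: create a ZFS dataset and share it over NFS straight from the host.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    POOL = "tank"                 # assumed pool name
    DATASET = f"{POOL}/media"     # assumed dataset name

    run(["zfs", "create", DATASET])
    run(["zfs", "set", "compression=lz4", DATASET])
    run(["zfs", "set", "sharenfs=on", DATASET])   # export over NFS directly from the host
    ```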

  • What is the underlying filesystem of the Proxmox hypervisor, and how did you pass storage into the OMV VM? Also, is anything else accessing this storage?

    I ask because...

    The "file lock ESTALE" error in the context of NFS indicates that the file lock has become "stale." This occurs when a process is attempting to access a file that is locked by another process, but the lock information has expired or become invalid. This can happen due to various reasons such as network interruptions, server reboots, or changes in file system state.

  • My NPM has websockets enabled and "block common exploits" turned on.

    Just checked Syncthing and it's set to 0.0.0.0:8384 internally, but that shouldn't matter if you changed the port.

    When Syncthing is set to listen on 0.0.0.0, it means it's listening on all available network interfaces on the device. This allows it to accept connections from any address that can reach the machine, rather than just the local interface. Essentially, it makes Syncthing reachable from any device on the network. (A tiny illustration of the difference is at the end of this comment.)

    Just make sure you open those firewall ports on the server syncthing is running on.

    BTW, the Syncthing protocol uses port 22000 TCP and UDP, the UDP side using a form of QUIC if you let it.

    So it's a good idea to allow both UDP and TCP on 22000 if you have a firewall configured on the Syncthing server.

    Edit: wording for firewall ports and the purpose of 0.0.0.0.
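
    And the promised illustration of the 0.0.0.0 point: binding to 127.0.0.1 only accepts connections from the same machine, while 0.0.0.0 listens on every interface, so other LAN hosts can reach it (firewall permitting). The ports are arbitrary examples, not Syncthing's.

    ```python
    # Bind the same kind of listener two ways to show the difference.
    import socket

    def listen(addr, port):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((addr, port))
        s.listen()
        print(f"listening on {addr}:{port}")
        return s

    local_only = listen("127.0.0.1", 9001)   # reachable only from this host
    all_ifaces = listen("0.0.0.0", 9002)     # reachable from other machines on the LAN
    ```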

  • If you are somewhat comfortable with the CLI you could install Proxmox on ZFS, then create datasets off the pool to do whatever you want. If you wanted a nicer GUI to manage ZFS you could also install Cockpit directly on the Proxmox hypervisor along with the ZFS plugin, to manage the datasets and share them a bit more easily. Obviously you could do all of that from the command line too.

    Personally I use Proxmox now, where before I made use of Debian. The only reason I switched was that it made VM/LXC management easy. As for TrueNAS, it's also basically Debian with a different GUI. These days I'm more focused on optimization in my homelab journey. I hope you enjoy the experience, however you begin and whatever applications you start with.

  • I think I would get rid of that optical drive and install a converter for another drive, like a 2.5" SATA. That way you could get an SSD for the OS and leave the bays for RAID.

    Other than that, what you want to put on this beast, and whether you want to utilize the hardware RAID, will determine the recommendations.

    For example, if you are thinking of a file server with ZFS, you need to disable the hardware RAID completely by getting it to expose the disks directly to the operating system. Most would investigate whether the RAID controller can be flashed into IT mode for this. If not, some controllers do support a simple JBOD mode, which would be better than using the RAID in a ZFS configuration. ZFS likes to maintain the disks directly. You can generally tell it's correct if you can see all your disk serial numbers during setup.

    Now, if you do want to utilize the RAID controller and are interested in something like Proxmox or just a simple Debian system, I have had great performance with XFS on hardware RAID. You lose out on some advanced copy-on-write features, but if disk I/O is your focus, consider it worth playing with.

    My personal recommendation is to get rid of the optical drive and replace it with a 2.5" converter for more installation options. I would also recommend maxing out the RAM and possibly upgrading the network card to a 10 Gb NIC if possible. It wouldn't hurt to investigate the power supply; the original may be a bit dated and you may find a more modern supply that is more energy efficient.

    My general OS recommendation would be Proxmox installed in ZFS mode with an ashift of 12.

    (It's important to get this number right for performance because it can't be changed after pool creation: 12 for spinning disks and most SSDs, 13 for more modern SSDs. A worked example of the ashift math and pool layout is sketched at the end of this comment.)

    Only do ZFS if you can bypass all the RAID functions.

    I would install the rpool as a basic ZFS mirror on a couple of SSDs. When the system boots I would log into the web GUI and create another ZFS pool out of the spinners, ashift 12. If this is mostly a pool for media storage I would make it a RAID-Z2. If it is going to have VMs on it I would make it RAID 10 style (striped mirrors); disk I/O is significantly better for VMs in a RAID 10 style ZFS pool.

    From here, for a bit of easier ZFS management, I would install Cockpit on top of the hypervisor with the ZFS plugin. That should make it really easy to create, manage, and share ZFS datasets.

    If you have read this far and are considering a setup like this, one last warning: use the Proxmox web UI for all the tasks you can. Do not use the Cockpit web UI for much more than ZFS management.

    Have fun creating LXCs and VMs for all the services you could want.
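
    And the promised ashift/pool-layout sketch: ashift is just log2 of the physical sector size (4K -> 12, 8K -> 13), and the pool below follows the "striped mirrors for VM I/O" advice above. The pool name and device names are made up, and it prints the zpool command instead of running it.

    ```python
    # ashift math plus an example RAID 10 style (striped mirrors) zpool create command.
    import math
    import shlex

    def ashift_for(sector_bytes: int) -> int:
        return int(math.log2(sector_bytes))

    assert ashift_for(4096) == 12   # typical HDDs / most SSDs
    assert ashift_for(8192) == 13   # some newer SSDs

    disks = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]   # example spinners
    cmd = ["zpool", "create", "-o", "ashift=12", "tank",
           "mirror", disks[0], disks[1],
           "mirror", disks[2], disks[3]]
    print(shlex.join(cmd))   # run on the host yourself once you're sure
    ```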