Posts: 0 · Comments: 71 · Joined: 2 yr. ago

  • Would love a new Steam Machine, and it could actually be good this time. Proton didn't exist when they released the original Steam Machines, which limited you to Linux ports of games. I had bought two but wiped them & did clean installs of Windows 7 so we could play all the games we wanted to.

    Before Proton, gaming on Linux relied on native ports or WINE. Native ports were rare & not always better. WINE took some learning to make work well, and I never got any good at it.

  • The OS was also very limited, with a focus on Linux ports of games, of which there were not very many at the time. Proton wasn't a thing yet. I bought two of them, one for myself and one for my brother. I tested it out & it was neat, but wiped both to do clean installs of Windows 7 so we could play the games we wanted.

  • For the SATA drive behavior, it's probably finishing the writes from the buffer. I like to use the iotop utility to watch storage I/O activity on my systems. You could try running it on both systems to get a better picture of what's going on.
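
    Something like this (a minimal sketch; iotop usually needs root):

        # watch per-process disk I/O; -o shows only processes actively doing I/O,
        # -a accumulates totals since iotop started
        sudo iotop -o -a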

    I currently use NFS and CIFS but have used iSCSI in the past. I like the simplicity of NFS & CIFS, and they meet my needs. iSCSI has its strengths, as others have stated. (rough client-side examples after the list below)

    • /var/lib/mysql - I would say iSCSI in its own image+LUN. You should get lower latency as well as higher transfer rates compared to NFS for a DB, but it depends on the kind & amount of usage.
    • virtual machine images - I prefer NFS mounts for the same reason: it's easier to work with the files directly. If you do go with iSCSI, you can have different disk images for different kinds of VMs. You should be able to use both at the same time on most hypervisors if you want to play with them too.
    • lots of small files - NFS should work without issue
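
    For reference, here's roughly what each looks like from the client side (the hostname, export path, and IQN are made-up examples, not from my setup):

        # NFS: mount an export (e.g. for VM images or lots of small files)
        mount -t nfs 192.168.10.50:/volume1/vmstore /mnt/vmstore

        # iSCSI: discover targets on the NAS, then log in (open-iscsi tools)
        iscsiadm -m discovery -t sendtargets -p 192.168.10.50
        iscsiadm -m node -T iqn.2000-01.com.example:target-1 -p 192.168.10.50 --login
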
  • I started with Slackware around 1997 because I needed a free C compiler, plus all I had were junk, hand-me-down computers. I stopped programming & using Linux around 2000 and switched back to Windows on a newly built, decent computer. From about 2000 until about 2016 I rarely used Linux besides a couple routers. The Raspberry Pi 3 came out with built-in WiFi & my dislike of Windows 10 got me back into Linux for more use cases. Valve's work on Proton finally made it so I could switch to Linux for most gaming & my Windows usage dropped to almost nothing. Currently using Manjaro on my primary desktop and Fedora 38 on a tablet, with a mix of distros in LXCs & VMs on a mini-PC w/ Proxmox VE & a Synology NAS. SteamVR on Linux has been getting a decent amount of work lately, so once it gets stable I'll have one less reason to need Windows.

  • There's nothing stopping you from running podman containers with full root access; just create & run them as root. You run them as whatever user you want. I've done it to troubleshoot containers on more than one occasion, usually when I want to play with VPNs or privileged ports but am too lazy to do it properly. The end goal for a lot of people, including myself, is to run as many things as non-root as possible. Why? Best practices around security have you give a service the minimal access & resources it needs to do its tasks. Some people allow traffic from the internet to their containers & they probably feel a little safer running those programs as non-root, since it adds an extra layer that may need to be broken to fully compromise a system.
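
    For example (the image is just an example, any image works the same way):

        # rootless: runs as your user, can't bind ports below 1024 by default
        podman run -d -p 8080:80 docker.io/library/nginx

        # root: full access, can bind privileged ports like 80/443
        sudo podman run -d -p 80:80 docker.io/library/nginx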

  • Sounds like the drives are combined with RAID 5. It could be a hardware RAID card or software RAID as part of the BIOS. The server model number can be used to search for the administrator manual, which may have more info. If it's a hardware RAID card, try to find the model number & search for its manual. If it's software RAID at the BIOS level, the motherboard/server manual will cover it. There should be some messages and prompts during boot related to it. Terms to look for: 'RAID', 'storage controller', 'PERC', 'LSI'.
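
    If you can get a shell on it, a quick sketch of where to look (assumes a Linux install on the box):

        # hardware RAID controllers usually show up on the PCI bus
        lspci | grep -i -e raid -e lsi -e perc

        # Linux software RAID (md) arrays, if any, show up here
        cat /proc/mdstat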

  • Another benefit to LXC is you can map devices, including a GPU, to multiple LXCs while keeping them accessible to the host. For my home setup I currently have 3 LXCs with access to the iGPU: 1 for Jellyfin+Caddy via nested podman, 1 for moonfire-nvr via nested podman, and 1 I've been using to try to figure out hardware transcoding with Owncast through multiple install methods, but no luck so far. I've also been playing with mapping RTL-SDR v3 devices, a Zigbee stick, a Z-Wave stick, and a Coral USB for a variety of projects lately.

    edit: I forgot to answer the question and went straight to ranting, lol. An LXC is like a bare-metal VM. You can install & run multiple things on it like a normal VM, including podman or docker.
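
    For the iGPU mapping, a rough sketch of the kind of lines involved in /etc/pve/lxc/<ID>.conf (major number 226 and renderD128 are typical for Intel iGPUs, but verify with ls -l /dev/dri on your host):

        # allow the container to use the DRI character devices
        lxc.cgroup2.devices.allow: c 226:* rwm
        # bind-mount the render node from the host into the container
        lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file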

  • On Proxmox you should be able to share any GPU (integrated or dedicated) with multiple LXCs while keeping it accessible to the host. I use the Intel integrated GPU in LXCs for Plex, Jellyfin, and one with just ffmpeg that I use to convert videos occasionally. I used these instructions as a starting point/base when I set mine up on Proxmox v7.x: https://forum.proxmox.com/threads/plex-hw-transcoding-lxc-and-jasper-lake-igpu-passthru.116163/

    I had looked at instructions to assign the GPU to a specific VM, but it looked like way too much work, and people were saying it didn't always work for the 11th-gen iGPUs. Thankfully I ran across the sharing method, and it's been running stably ever since.
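
    Once it's shared, a quick way to sanity-check from inside the LXC (the ffmpeg line is a generic VAAPI example, not my exact command):

        # confirm the render node made it into the container
        ls -l /dev/dri

        # test a hardware transcode via VAAPI
        ffmpeg -hwaccel vaapi -vaapi_device /dev/dri/renderD128 \
          -i input.mkv -vf 'format=nv12,hwupload' -c:v h264_vaapi output.mp4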

  • My info may be outdated, as I last had G Fiber about a year ago, but I've moved out of their service area, so I'm stuck with AT&T fiber along with their horrible modem+router :(

    When I first got the 2G down/1G up G Fiber service there was no bridge mode & I had to use their provided device as modem+router+wifi. They updated it to add a bridge mode option, but I never tested it; I had dropped back down to 1G down & up before that option was available.

    edit: forgot to mention I had read that some people had luck using a Unifi Dream Machine to plug in G Fiber's 2.5G SFP-looking module, but I wasn't willing to spend any more money on anything Unifi besides WiFi APs.

  • My last NAS & ESXi box were 12 years old when I retired them. I had thought about sticking with used enterprise gear but wanted a break to be a little lazy for a couple years. Storage is on a Synology (DS1520+) and Proxmox runs on an Asus PN63-S1 mini PC. Hyper Backup was the primary reason I chose Synology (I've always been lazy about off-site backups), and the Docker feature has come in handy for things like a secondary Pi-hole & DNS. LXCs with docker or podman have been able to cover the majority of my needs in Proxmox, but I still have Home Assistant & the Unifi Network Controller on their own VMs. Home Assistant I have zero plans to move. Unifi I eventually plan to move over to docker, but it works for now, albeit on an older version. I really need to up my documentation & diagram game, it's all a huge mess, lol.

    Future plans: I would love to have a closet full of used enterprise servers running Proxmox with an all-flash Ceph storage backend; then I could run whatever NAS distro I want as a VM. My budget is focused elsewhere for the next year or two unfortunately, so it's gonna be a while unless something breaks.

    Always like to hear about other setups as I am constantly re-thinking my own.

  • I have a public wildcard DNS entry (*.REMOVEDDOMAIN.com) on Cloudflare on my primary domain that resolves to 192.168.10.120 (my Caddy host).

    Caddyfile

        {
          # global options: ACME email & Cloudflare token for DNS challenges
          email EMAILREMOVED@gmail.com
          acme_dns cloudflare TOKENGOESHERE
        }

        portal.REMOVEDDOMAIN.com {
          reverse_proxy 127.0.0.1:8081
        }

        speedtest.REMOVEDDOMAIN.com {
          reverse_proxy 192.168.10.125:8181
        }

  • You can self-host an ACME server, which lets you use certbot to do automatic renewals even for private, internal-only certs. I don't know if it would work with NPM. I plan to test that out at some point in the future, but my current setup works & I'm not ready to break it for a maybe yet :P
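
    As a rough sketch of the idea (assuming something like step-ca as the internal CA; the hostname, directory path & domain here are made up, and certbot has to trust the CA's root cert):

        # point certbot at the internal ACME directory instead of Let's Encrypt
        REQUESTS_CA_BUNDLE=/etc/ssl/internal_root_ca.crt \
          certbot certonly --standalone \
          --server https://ca.internal/acme/acme/directory \
          -d portal.internal.lan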

  • I use Caddy with the Cloudflare DNS plugin for Let's Encrypt DNS-based challenges. It should work for wildcards too, but I only have a couple subdomains so I never tried. My DNS entries are public but point at private IP ranges, e.g. nc.PRIVATEDOMAIN.COM resolves to 192.168.1.20, where Caddy sends the traffic to my Nextcloud docker.
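
    If you wanted the wildcard version, I'd guess the Caddyfile would look something like this (untested by me; the backend IP & port are examples):

        *.PRIVATEDOMAIN.COM {
          # DNS challenge is required for wildcard certs
          # (redundant if acme_dns is already set in global options)
          tls {
            dns cloudflare TOKENGOESHERE
          }
          @nc host nc.PRIVATEDOMAIN.COM
          handle @nc {
            reverse_proxy 192.168.1.20:8080
          }
        }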

  • Free and centrally managed? I'm not aware of any, but I'm definitely interested in something like that too.

    My current setup has Proxmox backing up all LXCs and VMs to a Synology NAS, then the Synology NAS backing up to Backblaze. Both run nightly. I'm using the built-in backup utility on Proxmox VE pointed at a CIFS share on the Synology NAS.
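
    Roughly what the scheduled backup job runs under the hood (vzdump is Proxmox's backup tool; the VMID and storage name are just examples):

        # snapshot-mode backup of guest 101 to the CIFS-backed storage, zstd-compressed
        vzdump 101 --storage synology-cifs --mode snapshot --compress zstd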

    Synology does have a software backup client available, but I have never used it. My desktops & laptops are easily reinstalled+reconfigured; I just make sure the data I care about is stored or synchronized to my NAS or the cloud: Nextcloud for files, Firefox Sync for history+bookmarks, the Bitwarden client+Vaultwarden for passwords, and chezmoi for some dotfiles on some Linux systems.