  • Joplin: Sufficient but no callouts :(

    Can you give an example of those "callouts"? Joplin has many plugins; you may well find that feature in one of them.

    My only complaint about Joplin is that there's no production-ready, real WebUI for it yet.

  • Too bad the UI sucks and it doesn't have a WebUI.

  • Well, this solves nothing. I don't really know what's going on with Thunderbird, but it's looking like a piece of crap: the latest UI changes made it worse, just a few months after the previous revision, which was actually much more visually pleasing. Is it that hard to look at what others do instead of adding random boxes everywhere?

    Anyway, the worst part is that right now Thunderbird wastes more RAM than RoundCube running inside a browser with the Calendars and Contacts plugins. Makes no sense.
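
    For what it's worth, here's a minimal sketch of how one could compare the two by resident memory, assuming Python with the third-party psutil package; the process names ("thunderbird", "firefox") are assumptions and will vary by platform:

    ```python
    # Minimal sketch: sum the resident set size (RSS) of all processes,
    # grouped by process name. Requires the third-party psutil package.
    from collections import defaultdict

    import psutil

    rss = defaultdict(int)
    for proc in psutil.process_iter(["name", "memory_info"]):
        mem = proc.info["memory_info"]
        if proc.info["name"] and mem:
            rss[proc.info["name"].lower()] += mem.rss

    # Assumed process names; adjust for your platform and browser.
    for name in ("thunderbird", "firefox"):
        print(f"{name}: {rss.get(name, 0) / 2**20:.0f} MiB")
    ```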

  • Get a USB-C DAS (disk enclosure) for your disks; those use their own power supply. Since it's USB-C, performance will be very good and stable, and you'll be happy with it.

  • Well... If you're running a modern version of Proxmox then you're already running LXC containers, so why not move to Incus, which is made by the same people?

    Proxmox (...) They start off with stock Debian and work up from there which is the way many distros work.

    Proxmox has been using Ubuntu's kernel for a while now.

    Now, if Proxmox becomes toxic

    Proxmox is already toxic: it requires a paid license for the stable version and for updates. Furthermore, the Proxmox guys have been found to withhold important security updates from non-stable (non-paying) users for weeks.

    My little company has a lot of VMware customers and I am rather busy moving them over. I picked Proxmox (Hyper-V? No thanks) about 18 months ago when the Broadcom thing came about and did my own home system first and then rather a lot of testing.

    If you're expecting the same kind of reliability you had with VMware, you're going to have a very hard time on Proxmox soon. I hope not, but I also know how Proxmox works.

    I've run Proxmox since 2009 and, until very recently, professionally in datacenters: multiple clusters of around 10-15 nodes each. That means I've been around for all of Proxmox's wins and fails; I saw the rise and fall of OpenVZ, the subsequent and painful move to LXC, and the SLES/RHEL compatibility issues.

    While Proxmox works most of the time and their paid support is decent, I would never recommend it to anyone since Incus became a thing. The Proxmox PVE kernel has a lot of quirks. For starters, it is built upon Ubuntu's kernel – already a dumpster fire of hacks waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations – and on top of that it's typically an older version, mangled and twisted by the extra-feature garbage bolted on top.

    I got burned countless times by Proxmox's kernel: broken drivers, waiting months for fixes already available upstream or for them to fix their own bugs. As practical examples: at some point OpenVPN was broken under Proxmox's kernel; Realtek networking has probably been broken for more time than it has worked; and ZFS support was introduced only to bring kernel panics. Upgrading Proxmox is always a shot in the dark: half of the time you get a half-broken system that boots and passes a few tests but will randomly fail a few days later.

    Proxmox's startup is slow, slower than any other solution – it even includes management daemons that are just there to ensure that other daemons are running. Most of the built-in daemons are so poorly written and tied together that they don't even start properly with the system on the first try.

    Why keep dragging along all of the Proxmox overhead and potential issues when you can run a clean shop with Incus, actually made by the same people who make LXC?
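
    To illustrate how small that clean shop can be, here's a minimal sketch driving the incus CLI from Python. It assumes Incus is already installed and initialised; the image alias and container name are just examples:

    ```python
    # Minimal sketch: create, use and list an LXC container via the incus CLI.
    # Assumes incus is installed and initialised; names are illustrative.
    import json
    import subprocess

    def incus(*args: str) -> str:
        return subprocess.run(["incus", *args], check=True,
                              capture_output=True, text=True).stdout

    incus("launch", "images:debian/12", "web01")        # create + start
    print(incus("exec", "web01", "--", "uname", "-a"))  # run a command inside
    containers = json.loads(incus("list", "--format", "json"))
    print([c["name"] for c in containers])              # e.g. ['web01']
    ```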

  • You may not want to depend on those cloud services, and if you need something that isn't static, they don't cut it.

  • Why only email? Why not also a website? :)

    "self-hosting both private stuff, like a NAS and also some other is public like websites and whatnot"

    Some people do it, and to be fair a website is way simpler and less prone to issues than mail.
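
    To put that in perspective, a bare-bones static site needs nothing but Python's standard library – a minimal sketch, where the port and directory are arbitrary choices – while a mail stack needs an MTA, spam filtering, DNS records and more:

    ```python
    # Minimal sketch: serve a static website from ./public on port 8080.
    # Standard library only; the port and directory are arbitrary.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="public")
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()
    ```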

  • If you did you would know I wasn't looking for advice. You also knew that exposing stuff publicly was a prerequisite.

  • Your billion dollar corporations aren’t running dedicated hardware

    You said it, some banks are billion dollar corporations :)

  • That's a good setup with multiple IPs, but you still have a single firewall that might be compromised somehow if someone gets access to the "public" machine. :)

  • You're mostly on scenario 2.B, same as me. That's the most flexible yet secure design.

  • Wow, hold your horses, Edward Snowden!... but at the end of the day Qubes is just a Xen hypervisor with a cool UI.

  • What you're describing is scenario 2.

  • Sorry, I misread your first comment. I was thinking you said "VPS". :)

  • because you want to learn them or just think they’re neat, then please do! I suspect a lot of people with these types of home setups are doing it mostly for that reason

    That's an interesting take.

  • Are you sure? A big bank usually does... It's very common to see groups of physical machines + public cloud services that are more strictly controlled than others and serve different purposes: one group might be public apps, another internal apps, and another HVDs (virtual desktops) for the employees.

  • Kinda Scenario 1 is the standard way: firewall at the perimeter with separately isolated networks for DMZ, LAN & Wifi

    What you're describing is close to scenario 1, but not purely scenario 1. It's a mix of public and private traffic on a single IP address and a single firewall, which a lot of people use because they can't get two separate public IP addresses running side by side on their connection.

    The advantage of pure scenario 1 is that it greatly reduces the attack surface by NOT exposing your home network's public IP to whatever you're hosting and by not relying on the same firewall for both. Even if your entire hosting stack gets hacked, there's no way the attacker can get into your home network, because they're two separate networks.

    Scenario 1 describes having 2 public IPs: a switch after the ISP ONT, with one cable going to the home firewall/router and the other to the server (or to another router/firewall). Much more isolated. It isn't a simple DMZ; it's literally the same as having two different internet connections, one for each thing.
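
    A minimal sketch of how one might verify that isolation from the exposed server's side; the home-network address and ports below are placeholders. In a correct scenario 1 setup, every probe should fail:

    ```python
    # Minimal sketch: run from the exposed server to confirm it cannot
    # reach the home network. Address and ports are placeholders.
    import socket

    HOME_NET_HOST = "192.168.1.1"   # placeholder: home router's LAN address
    PORTS = (22, 80, 443)

    for port in PORTS:
        try:
            with socket.create_connection((HOME_NET_HOST, port), timeout=3):
                print(f"{HOME_NET_HOST}:{port} open - isolation is broken!")
        except ConnectionRefusedError:
            print(f"{HOME_NET_HOST}:{port} refused - host reachable, check the firewall")
        except OSError:
            print(f"{HOME_NET_HOST}:{port} unreachable - as it should be")
    ```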