[Question] Security considerations when self-hosting Nextcloud
cyberwolfie @lemmy.ml · 35 posts · 422 comments · joined 2 yr. ago
Oh boy, don't like the sound of that...
That is correct. I used WebFAI to install, the automatic installer provided by Tuxedo Computers (https://www.tuxedocomputers.com/en/TUXEDO-WebFAI.tuxedo).
There are three partitions on my primary SSD: one mounted to /boot (1G), one mounted to /boot/efi (512M), and my main partition, which is encrypted. I'm guessing it is one of the two boot partitions that doesn't have enough space to update initramfs-tools?
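To check that, I guess I'd start with something like this (just a quick sketch):

```
df -h /boot /boot/efi   # how full are the two boot partitions?
ls -lh /boot            # how many kernels/initrd images are sitting there?
```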
Thanks, I'll look into setting that up. So far I've only ever used ssh keys for GitHub.
This is one of those areas that often has me confused... For now, the DNS entry with Cloudflare is set to 'DNS Only'. That is perhaps a mistake on my part, and I should enable the proxy? Right now I can't remember the reasoning for why I set it up like this.
Originally I wanted to set up Nginx as a reverse proxy to serve services other than Nextcloud on the same server, on different ports. That was the approach I found easiest to manage at the time, and like the AIO container is set up now, accessing my server's IP address automatically routes to Nextcloud even if I had another service running. I could maybe configure Apache to do the same job I wanted Nginx for? At the time, I opted to get another VPS dedicated to other, smaller services instead as a temporary solution that has since turned permanent. However, this will be important to me when/if I start hosting this locally instead, as I would want my server to host other services as well.
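For future reference, my understanding is that a minimal name-based setup in Nginx would look roughly like this, with both services sharing the same port and routed by hostname (the hostnames and upstream ports below are placeholders I made up, and I haven't tested this exact snippet; TLS is left out for brevity):

```
sudo tee /etc/nginx/sites-available/reverse-proxy.conf <<'EOF'
server {
    listen 80;
    server_name cloud.example.com;
    location / {
        # Forward to the port the Nextcloud container listens on locally
        proxy_pass http://127.0.0.1:11000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
server {
    listen 80;
    server_name other.example.com;
    location / {
        # A second, unrelated service on the same machine
        proxy_pass http://127.0.0.1:8080;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
```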
I managed to kill the process, and the lock was released along with it, allowing me to upgrade my system. After this, I moved all external sources out of sources.list.d, upgraded my system, and then added them back one by one to see if any of them triggered an error. None of them did.
When I reboot the system, the problem is back. I've added some more details in the original post.
As added as an edit in the original post, I managed to update my system by killing the process, doing an upgrade with only the original sources.list, and then adding the third-party repos back one by one. None of the third-party repos caused any problems when added back. The problem persists upon reboot. There seems to be an issue with updating initramfs at the end of every sudo apt upgrade, but that shouldn't run on boot, so I don't think that can be the cause?
OK, so now I have managed to update my system by killing the process, which releases the lock, and then running a normal sudo apt update and sudo apt upgrade. For the sake of troubleshooting, I tried adding back my third-party repos one by one, and none of them caused any problems.
However, when rebooting, the same problem happens again. In System Settings, auto update was already set to "Manually" and offline updates is unchecked; I have not made any modifications to this. I did not have software-properties-kde installed, and it was also not available when running sudo apt install software-properties-kde, which suggested only software-properties-qt instead. So I could not check those settings, but in the 20auto-upgrades file, Update-Package-Lists and Unattended-Upgrades are the only lines present, and both are set to 1.
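If I want to rule the periodic apt jobs out entirely, my understanding is that flipping both switches to 0 should do it (key names as I understand the standard Ubuntu layout; I haven't actually tried this yet):

```
cat /etc/apt/apt.conf.d/20auto-upgrades   # check the current values first
printf '%s\n' \
  'APT::Periodic::Update-Package-Lists "0";' \
  'APT::Periodic::Unattended-Upgrade "0";' \
  | sudo tee /etc/apt/apt.conf.d/20auto-upgrades
```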
In addition, every time I run sudo apt upgrade, an update related to initramfs fails at the end. My disk is encrypted using cryptsetup, and as I've come to understand it, I should be very careful doing anything related to initramfs when that is the case. Here is the output:
    Processing triggers for initramfs-tools (0.140ubuntu13.2) ...
    update-initramfs: Generating /boot/initrd.img-6.2.0-10018-tuxedo
    I: The initramfs will attempt to resume from /dev/dm-2
    I: (/dev/mapper/system-swap)
    I: Set the RESUME variable to override this.
    zstd: error 25 : Write error : No space left on device (cannot write compressed block)
    E: mkinitramfs failure zstd -q -1 -T0 25
    update-initramfs: failed for /boot/initrd.img-6.2.0-10018-tuxedo with 1.
    dpkg: error processing package initramfs-tools (--configure):
     installed initramfs-tools package post-installation script subprocess returned error exit status 1
    Errors were encountered while processing:
     initramfs-tools
    E: Sub-process /usr/bin/dpkg returned an error code (1)
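If it is just /boot being full, my understanding is that something along these lines should recover it (untested, and I'd double-check which kernels are safe to remove first):

```
df -h /boot /boot/efi            # confirm the partition is actually full
sudo apt autoremove --purge      # drop old kernels and their initrd images
sudo update-initramfs -u -k all  # regenerate the initramfs images
sudo dpkg --configure -a         # let the failed initramfs-tools trigger finish
```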
Since this is buried quite a long way down into a thread, I will now also update the main post to include this update.
Would there not be a risk of corrupting some of the repo files and dependency lists by just killing it?
I have checked dmesg after your suggestion, but I did not see anything that tipped me off to what might be wrong. Is there anything in particular I should be looking for?
My sources.list file is pretty clean, with three https sources for the Tuxedo OS mirrors of the original Ubuntu repos. In the sources.list.d directory there are some that have been added by me, such as for Signal, Librewolf and VS Codium. In total there are 11 files in here, each with one additional source. All except one are https, and the last one is mirror+file. In the process tree for apt-get there are 13 subprocesses, while there are 14 sources in total (11+3). Could it be that it hangs on the last one here?
EDIT: Would this be a viable way to troubleshoot? I back up the sources and replace them with a blank sources.list file and an empty sources.list.d directory. If that works, I add the repos back one at a time and see which one fails. Or could I run into unintended trouble if I remove the main repos, even for a short time? I would think that it just wouldn't find anything and would be happy that there are no updates.
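Concretely, something like this is what I have in mind, though I'd probably keep the main sources.list in place to be safe (standard apt paths assumed):

```
# Back everything up first
sudo mkdir -p /root/apt-sources-backup
sudo cp -a /etc/apt/sources.list /etc/apt/sources.list.d /root/apt-sources-backup/
# Move the third-party lists out of the way and test
sudo rm /etc/apt/sources.list.d/*.list
sudo apt update
# ...then restore the files one at a time from the backup and re-run apt update
```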
As far as I can tell, these are the methods apt uses to get information from the repositories listed in sources.list and in the sources.list.d directory. The number of subprocesses almost matches the number of sources there: in reality there are 14 listed, not 13 as seen in the ps output. Among the sources I can find one entry that starts with mirror+file, but otherwise there are 13 https entries. So I am not sure what that last line is doing.
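For reference, this is roughly how I counted them (assuming all entries are classic one-line deb lines in .list files):

```
# Configured package sources, one per active 'deb' line
grep -hsE '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list | wc -l
# Running apt method workers, for comparison
ps -ef --forest | grep -c '[a]pt/methods'
```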
Anyways, it seems to me that it gets stuck somewhere updating the repositories list. Right now, I'm stuck with three questions:
- I'm still unsure as to whether it would be safe to kill the process, as I could imagine that corrupted dependency files could be really bad?
- Also, would killing the process automatically release the lock, or would I need to remove that myself after?
- Is there any reason to believe that this would even work, seeing as this happens every time on boot? I imagine that if I kill the process, delete the lock and try to run sudo apt update, I'll just end up in the same place again.
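For my own notes, the plan would be roughly this (standard apt/dpkg lock paths assumed; treat it as a sketch rather than something I've verified):

```
pgrep -af 'apt-get update'   # find the hanging process and note its PID
sudo kill 1635               # 1635 is the PID of the hanging apt-get update in my case
sudo fuser -v /var/lib/apt/lists/lock /var/lib/dpkg/lock*   # is anything still holding a lock?
sudo dpkg --configure -a     # finish any half-configured packages, just in case
sudo apt update
```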
Yeah, it happens straight after boot, but it never resolves itself. The process is there again after a reboot as well, so it seems that it starts running apt-get update and gets stuck/hangs.
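What I'd check next is whether it's the apt-daily timers that kick this off at boot (standard Ubuntu unit names assumed):

```
systemctl list-timers 'apt-daily*'           # when the periodic jobs last ran / will run next
journalctl -b -u apt-daily.service           # what the update job did during this boot
journalctl -b -u apt-daily-upgrade.service
```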
It's showing up in the list with a bunch of sub-processes that look like HTTPS requests (I posted the output below). Which logs could I check to learn more about what's happened?
When you say remove the file, do you mean the lock file?
It persists even after several hours. Never happened before, and I've been using the system for about 6 months now. I posted the output from the process tree below, but I'm not sure if that makes anything any clearer. It's as if it is not receiving any data and just waiting for the servers to respond?
Here is the output from running that command:
    root  1635  0.0  0.0  22208   8928 ?  S  aug.06  0:00  \_ apt-get update
    _apt  1654  0.0  0.0  27528   9600 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1655  0.0  0.0  27528   9600 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1656  0.0  0.0  27528   9600 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1657  0.0  0.0  27528   9760 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1658  0.0  0.0  27528   9760 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1659  0.0  0.0  27528   9760 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1660  0.0  0.0  27528   9760 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1661  0.0  0.0  27528   9600 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1662  0.0  0.0  20908   6880 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/mirror+file
    _apt  1663  0.0  0.0  27528   9760 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1664  0.0  0.0  27528   9920 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1667  0.0  0.0  27532  10080 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/https
    _apt  1669  0.0  0.0  20864   7680 ?  S  aug.06  0:00      \_ /usr/lib/apt/methods/file
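Given that, the next thing I'd probably look at is whether those method workers have any open connections at all, for example (the PID is the first https worker from the output above, and strace may need to be installed):

```
sudo ss -tnp | grep '"https"'          # do the apt https workers have TCP connections open?
sudo strace -p 1654 -e trace=network   # watch what one worker is actually blocked on
```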
For now, Mozilla's official stance is to oppose this proposal: https://github.com/mozilla/standards-positions/issues/852#issuecomment-1648820747
I wish that this kind of thing would generate enough outrage to increase Firefox's market share considerably (from the <3% it is today), and in that way deter websites from adopting it, since they would be blocking a larger share of users. Unfortunately, I think that might be too naive of me...
I do not experience any of these improperly working sites. My daily driver is Librewolf, where this does sometimes occur, but when it does, I just switch to vanilla Firefox and everything is fine.
My calendaring needs might be less restrictive than yours, but Proton offers a nice calendar that, from what I understand, has at least some integration with their e-mail client. Have you checked it out?
I use a self-maintained Nextcloud on a VPS myself for all my calendaring needs, which is basically keeping track of appointments, syncing via CalDAV to my phone, and sharing some sub-calendars with other people. Setting up a Nextcloud server is admittedly a bit more hassle than just signing up for a service, but here too there are options that make it a bit easier than hosting everything yourself.
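In case it is useful: the syncing itself is just standard CalDAV against Nextcloud's DAV endpoint, so it can be sanity-checked with curl (the hostname and username below are placeholders):

```
# Should return a 207 multi-status XML listing of the user's calendars
curl -u myuser -X PROPFIND -H "Depth: 1" \
  https://cloud.example.com/remote.php/dav/calendars/myuser/
```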
I find Google Maps by far the hardest service to rid myself of, followed by Gmail (the time it takes!!! Been using Proton for two years and I'm still not completely rid of my Gmail account). I'm slowly getting used to using OSM-based map services more and more.
This is perfect! I've been meaning to get into contributing to OSM in my area, and this makes that very easy. I will be heading out once the rain stops to test this.
I'm on CalyxOS and I have to enable location services to search for my Xiaomi Smart Band 7 through Gadgetbridge.
I use a Xiaomi Smart Band 7 and pair it with Gadgetbridge, and it works fine for my purposes, which are HR monitoring during the day, sleep, and workout sessions. I rarely interact with the watch itself (which is by design), so if you want more functionality out of your watch, this might be a little on the light side feature-wise. I tend to keep Bluetooth off, so I connect to it maybe once a day to sync data with Gadgetbridge, which I then export for analysis. It's a bit clunky to connect: I have to search for it first in the Gadgetbridge app, and only when it has found it can I attempt to reconnect. Maybe this is easily fixable, but I have not bothered because I only sync once a day.
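For the analysis part, the Gadgetbridge export is just an SQLite database, so something like this pulls the raw samples out (the table and column names may differ between Gadgetbridge versions, so check .tables first):

```
sqlite3 Gadgetbridge.db '.tables'   # list the tables in the exported database
sqlite3 -header -csv Gadgetbridge.db \
  'SELECT TIMESTAMP, HEART_RATE, STEPS FROM MI_BAND_ACTIVITY_SAMPLE LIMIT 10;'
```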
You do need to obtain a key first though, which requires a login to the Xiaomi servers. I used a throwaway e-mail for the registration. Gadgetbridge has no access to the internet.
Ah, I see. I hope a fiber connection will be available to you in the not-too-distant future, then. I would love to do this at home, but I'm going to need some serious study sessions to better understand home networking (and take appropriate action) before I start exposing services at home to the internet. I do wonder if I jumped onto this too fast, but I was just so incredibly fed up with relying on big tech monopolies for essential digital services...
I guess my last question would be whether you have an opinion on enabling the proxy in Cloudflare: is it a no-brainer or not?