
SayCyberOnceMore @ Cyber @feddit.uk
Posts: 18 · Comments: 571 · Joined 2 yr. ago

  • Still the same, or has it solved itself?

    If it's lots of small files rather than a few large ones, that'll be the file allocation table and / or journal overhead...

    A few large files? Not sure... something's getting in the way.

  • Where are you copying to / from?

    Duplicating a folder on the same NAS on the same filesystem? Or copying over the network?

    For example, some devices have a really fast file transfer until a buffer fills up and then it crawls.

    Rsync might not be the correct tool either if you're duplicating everything to an empty destination...?
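    To illustrate that last point (paths here are made-up temp dirs): when the destination is empty, rsync's delta-transfer buys you nothing, so a plain cp can be the quicker tool; rsync earns its keep when updating or resuming an existing copy.

```shell
# Sketch: duplicating a tree to an *empty* destination (example paths).
src=$(mktemp -d); dst=$(mktemp -d)
echo "some data" > "$src/file.txt"

# Straight duplicate -- no per-file comparison overhead:
cp -a "$src/." "$dst/"

# rsync shines when *updating* an existing copy, not seeding a fresh one:
if command -v rsync >/dev/null; then
    rsync -a "$src/" "$dst/"   # no-op here: everything already matches
fi
```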

  • Never had an issue with EXT4.

    Had a problem on a NAS where BTRFS was taking "too long" for systemd to check it, so it just didn't get mounted... bit of config tweaking and all is well again.

    I use EXT* and BTRFS wherever I can because I can manipulate them with standard tools (incl. gparted).

    I have 1 LVM system which was interesting, but I wouldn't do it that way in the future (I used it to add drives on a media PC).

    And as for ZFS... I'd say it's very similar to BTRFS, but just slightly too complex on Linux with all the licensing issues, etc., so I just can't be bothered with it.

    As a throw-away comment, I'd say ZFS is used by TrueNAS (not a problem, just sayin'...) and... that's about it??

    As to the OP's original question, I agree with the others here... something's not right there, but it's probably not the filesystem.
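    For reference, the commenter doesn't say exactly what they tweaked, but the usual fix for "systemd gave up waiting for the device" on a slow-to-appear BTRFS volume is an fstab entry along these lines (UUID and mountpoint invented for the example):

```
# /etc/fstab -- hypothetical entry; nofail stops a slow device blocking boot,
# and the x-systemd options stretch how long systemd will wait for it.
UUID=xxxx-xxxx  /mnt/nas  btrfs  defaults,nofail,x-systemd.device-timeout=30s,x-systemd.mount-timeout=5min  0  0
```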

  • I will be looking at setting up a local package cache soon, but hadn't thought about putting the AUR packages in the same one... nice.

    So, do you just build those packages anywhere and copy them to your repo?
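    That's essentially how a local pacman repo for AUR builds works. A sketch of the layout (repo path and repo name invented; the makepkg/repo-add steps are shown as comments since they need an Arch box):

```shell
# Hypothetical local repo that also holds AUR builds.
repo=/tmp/localrepo-demo
mkdir -p "$repo"

# On the build machine, per AUR package, you'd run something like:
#   git clone https://aur.archlinux.org/<pkg>.git && cd <pkg>
#   makepkg -s                      # build anywhere writable, as a non-root user
#   cp ./*.pkg.tar.zst "$repo"/     # then just copy into the repo
#   repo-add "$repo/custom.db.tar.gz" "$repo"/*.pkg.tar.zst

# Clients then point pacman.conf at it:
#   [custom]
#   SigLevel = Optional TrustAll
#   Server = file:///tmp/localrepo-demo
ls -d "$repo"
```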

  • Yeah, the problem was that some modules don't support become, so I just ran the whole thing that way.

    For the git and AUR steps, to drop sudo I found that I have to use the ansible become variables to override just those steps.

    Live & learn
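    A sketch of that pattern (the hosts, package, and user names are invented; the become keywords are stock Ansible): the play runs privileged throughout, and one task switches to an unprivileged user because makepkg refuses to run as root.

```yaml
# Hypothetical play: become everywhere, user switched for the AUR build step
- hosts: workstations
  become: true                    # whole play runs as root...
  tasks:
    - name: Install repo packages (needs root)
      ansible.builtin.package:
        name: git
        state: present

    - name: Build AUR package (makepkg refuses to run as root)
      ansible.builtin.command: makepkg -si --noconfirm
      args:
        chdir: /home/builduser/aur/somepkg
      become_user: builduser      # ...except this step, which drops to builduser
```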

  • Thanks for the pointer, but no, I'm not SSHing as root. And PermitRootLogin no is configured, so all good there.

    Turns out I start the entire sequence with become enabled, so I had to learn about changing users with the ansible become variables.

    Still have a few bugs to work out, but thanks for getting me on track

  • Ah! Ok, I'll dig into that and have a look.

    I thought I was SSHing into the clients as a non-root user, but I guess that's where I'm going wrong.

    Yeah, looking at the /tmp/aur folder it creates, its owner is root... hmmm.

    Thanks

  • Yep, Proxmox itself is very light on resources, so most is available for the VMs / containers.

    Just another point... I've had some issues with Dell BIOSes not respecting the "Power On after power loss" setting - usually a BIOS upgrade solves that, and 99% of Dells still have "just 1 more" update on the website...

    I'd also recommend installing a Wake-on-LAN tool on that Pi too... then if you VPN in from outside you can SSH into the Pi and power on other things that "accidentally" got shut down.
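    A sketch of that flow (hostnames and MAC are invented; `wakeonlan` / `etherwake` are the usual packaged tools). The "magic packet" itself is simple: 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast.

```shell
# From outside, over the VPN, you'd do something like:
#   ssh pi@pi.example.lan wakeonlan AA:BB:CC:DD:EE:FF
#
# The payload those tools send is just FF x6 + MAC x16 = 102 bytes.
# Demonstrated here as a hex string (204 hex chars):
mac='AABBCCDDEEFF'   # example MAC, colons stripped
payload=$(printf 'FFFFFFFFFFFF'; for i in $(seq 16); do printf '%s' "$mac"; done)
echo "${#payload}"   # 102 bytes = 204 hex characters
```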

  • There's clearly a lot of negativity towards the company, which I agree with, but I'm not reading enough positive support for the dev...

    It must be a bit daunting being on the front line going through this.

    I'd guess that anyone using the plugin could help them feel supported in these situations by contributing on their "Buy me a coffee" link...

    https://www.buymeacoffee.com/andre0512

  • Yep, now, I initially found the daily journal approach a bit strange, but I use this for work as much as personal stuff, so it actually helps...

    My suggestion for your use case would be to keep a page per "thing", i.e. server / container / etc., and then when you make a change you can just say (on that day's journal page):

    '' Setup a backup for [[Server X]] and it's going to [[NAS2]] (for example) ''

    Then, on either of those 2 pages you'll automatically see the link back to the journal page, so you'll know when you did it...

    I think you can disable the journal approach if it's not useful...

    But, the important part is, the files underlying the notes you're making are in plain text with the page name as the filename, whereas with Joplin you could never find the file...

    Also, if you modify the file (live) outside of Logseq, it copes with that and refreshes the content onscreen.

    And the links are all dynamic... renamed the NAS? Fine, Logseq will reindex all the pages for you...
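    To make the "plain text underneath" point concrete, here's roughly what a Logseq graph looks like on disk (the graph path and page contents are invented; the `pages/` + `journals/` split and the date-based journal filename match Logseq's defaults):

```shell
# Hypothetical Logseq graph: every page is just a Markdown file.
graph=/tmp/logseq-demo
mkdir -p "$graph/pages" "$graph/journals"

# A journal entry linking two pages...
printf '%s\n' '- Setup a backup for [[Server X]] going to [[NAS2]]' \
    > "$graph/journals/2024_01_15.md"

# ...and the pages themselves, named after the page title:
printf '%s\n' '- Backup target for [[Server X]]' > "$graph/pages/NAS2.md"
ls "$graph/pages"
```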

  • Thanks, yes, I think active-active would be another magnitude harder... and would need database, history, etc. on shared storage... over the top just to ensure the lights stay on.

    And backups are essential for all use cases (and not just the built-in HA backup left on the device / VM / container that just failed!)

    Thanks

  • Be aware that some old laptops had weird combined chipsets that Linux just can't use... I tried putting Linux Mint on a friend's laptop for their kids to use and the networking (wifi and cable) just wouldn't work... it was something that only Win98 / WinXP could use (from memory).

    So try anything, but be prepared to ditch it - as someone else mentioned, treat it as a learning exercise.