
Posts: 0 · Comments: 43 · Joined: 2 yr. ago

  • announced

    What announcement? There's been a new Personal Plus plan around for several months already - it was introduced without much fanfare, and simply raises the user count from 3 to 6 for a small fixed fee. Presumably this came from feedback from personal users wanting to contribute something rather than nothing.

    Where do you see that the free Personal plan has changed at all?

  • custom domain

    From what I gather, this refers to the email address you sign up with.

    If you use something like a non-gmail email address when signing up, it starts you off on the business plan with a trial (which you can instantly change to free). (Note: they're gonna change this auto-detection thing with shared domains soon due to a security hole.)

    I believe you can still use a custom domain (instead of the randomised *.ts.net provided one) with DNS lookups in your tailnet, on the personal (free) plan.

  • Do ignore me then - I assumed you might know the reference, and I only meant it in good humour. :) (Without spoiling anything - in the unlikely event you might some day watch it - Mr Milchick is a character who uses 'big words'. Your choice of words struck a chord.) I will say though, you're seriously missing out. The cinematography alone is brilliant and the acting exceptional.

  • You're limiting yourself somewhat if you're not able to plug in multiple drives at the same time. Otherwise, I might suggest mergerfs for basic JBOD pooling. A single-disk ZFS pool won't let you avoid bit rot - only detect it. SnapRAID - ideal for offline setups - would be the next step up, if you could dedicate one of your drives to parity.

    In your position, I'd do Duplicacy backups split/spanned over multiple backup drives (however you connect them).

    It has a pretty cool Erasure Coding feature that protects individual chunks from bit rot and possibly even bad sectors, plus the whole database-less architecture makes it very robust. You get de-duplication, high levels of compression, and encryption. Plus you can keep historic snapshots, so you avoid the risk of accidentally sync'ing ransomware over the top of your only copy.

    Edit: the CLI is free for personal use, and is source-available. Written in Go and extremely performant.
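
    For reference, if you could spare a drive for parity, SnapRAID needs only a small config file. A sketch with hypothetical mount points (not a drop-in config - adjust the paths to your own drives):

    ```
    # snapraid.conf - hypothetical two-data, one-parity layout
    parity /mnt/parity1/snapraid.parity
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    ```

    After changing files you'd run `snapraid sync`, and `snapraid scrub` periodically to catch bit rot.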

  • Crazy. Though I suspect the copy protection is done by the third party Termly, which hosts the policy.

    To select the text (in Firefox), first right-click and choose This Frame > Show Only This Frame. Press F12, expand `<head>`, find the second `<style>` block, then right-click it and choose Delete Node.

  • Multiple backups may be kept.

    Nice work, but if I may suggest - it lacks hardlink support, so it's quite wasteful in terms of disk space, and the number of 'tags' (snapshots) you can keep will be extremely limited.

    At least two robust solutions that use rsync+hardlinks already exist: rsnapshot.org and dirvish.org (both written in Perl). There's definitely room for backup tools that produce plain copies, instead of packed chunk data like restic and Duplicacy, and a Python or even Bash-based tool might be nice, so keep at it.

    However, I liken backup software to encryption - extreme care must be taken when rolling and using your own. Whatever tool you use, test test test the backups. :)
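
    The hardlink trick those tools rely on is easy to see with plain coreutils. A minimal sketch with made-up paths (rsnapshot essentially does this, plus an rsync of changed files on top):

    ```shell
    # Hardlink-snapshot sketch (hypothetical paths).
    # Unchanged files in the new snapshot share inodes with the old one,
    # so each additional snapshot costs almost no extra disk space.
    mkdir -p src
    echo "hello" > src/file.txt

    cp -a  src    snap.1    # initial full copy
    cp -al snap.1 snap.0    # new snapshot: directories copied, files hardlinked

    # Both snapshots reference the same inode until a file changes:
    stat -c '%i' snap.1/file.txt snap.0/file.txt
    ```

    Writing a changed file into `snap.0` replaces only that one link, which is what makes keeping many snapshots so cheap.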

  • Still using Private Internet Access (PIA).

    Honestly, I think they've fallen out of fashion because of FUD about being owned by an unsavoury parent company, but the most important matter to me is whether they keep logs, which they don't. They're one of the few VPN companies actually tested on this - in court, and in a recent audit. Plus they're still extremely cheap (if you go for the 3yr+3mo deal).

    Port forwarding works with this docker NAS stack. It doesn't use gluetun, but there's a specialised docker-wireguard-pia container as part of the stack, with a script that handles port changes. It's been flawless.

  • There's no point doing anything fancy like that - WireGuard over Tailscale is pretty pointless, as Tailscale is literally WireGuard with NAT traversal and authentication bolted on. Unless you enable subnet routing, it doesn't get more secure than that.

    And even if you do enable subnet routing (which you might wanna do if you need access to absolutely everything), you can use Tailscale ACLs to keep tighter control - say, allowing access only from specific (tagged) devices.
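
    For example, a sketch of what such an ACL might look like (the tag name and subnet are made up; Tailscale ACLs are HuJSON, so comments are allowed):

    ```json
    {
      "tagOwners": {
        "tag:admin": ["autogroup:admin"]
      },
      "acls": [
        // Only admin-tagged devices may reach the advertised LAN subnet.
        {"action": "accept", "src": ["tag:admin"], "dst": ["192.168.1.0/24:*"]}
      ]
    }
    ```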

  • If you can't find the original .torrent, one way to find it again is to use the BiglyBT client's Swarm Discoveries feature to search its DHT for the exact file size in bytes (of the main media file within). You may be able to find one or more matching torrents, and simultaneously seed them with Swarm Merging too.

    As well as the force-recheck method others have mentioned, you can also tell BiglyBT to use existing files elsewhere when adding the torrent, which copies the data over for you without risking overwriting the original files.
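
    To get the exact byte count to paste into Swarm Discoveries, `stat` on the main media file works (GNU stat shown; on macOS it's `stat -f %z`). A quick sketch with a stand-in file - the filename is hypothetical:

    ```shell
    # Create a stand-in "media" file just for the demo.
    printf 'dummy media data' > Some.Movie.2009.mkv

    # %s prints the exact size in bytes - the value you search the DHT for.
    stat -c %s Some.Movie.2009.mkv   # prints 16
    ```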

  • 100% this. OP, whatever solution you come up with, strongly consider disentangling your backup 'storage' from the platform or software, so you're not 'locked in'.

    IMO, you want something universal that works with both local and 'cloud' storage (ideally off-site, on your own or a family member's/friend's NAS - far less expensive in the long run). Trust me, as someone who came from CrashPlan and moved to Duplicacy 8 years ago: I no longer worry about how robust my backups are, as I can practise 3-2-1 on my own terms.

  • While you can do command-line stuff with Clonezilla, I think what they're referring to is its text-based guided interface, which doesn't differ much at all from the Rescuezilla GUI - the latter only looks marginally prettier. However, Rescuezilla bundles a few other useful tools and a desktop environment, so it's still a bit nicer to use.

  • Yep, I guess it depends on how much data of interest is on the drive. You can hook it up to dmde with a ddrescue/OpenSuperClone-mounted drive, which can let you index the filesystem while it streams content to the backup image. It reads and remembers sectors already copied, and you can target specific files/folders so you don't have to touch most of the drive.