c/technology mods are inactive
If no one responds here, post in !moderators@lemmy.world (also read the mod guidelines there if you're a new mod) or email info@lemmy.world. You need the help of an admin to make this transition, but if the only mod is banned it should be a pretty simple request.
> So I have a question, what can I do to prevent that from happening? Apart from hosting everything on my own hardware of course, for now I prefer to use VPS for different reasons.
Others have mentioned that client-caching can act as a read-only stopgap while you restore Vaultwarden.
But otherwise the solution is backup/restore. If you run Vaultwarden in a docker or podman container using volumes to hold state... then as long as you can restart Vaultwarden without losing data, you also know exactly what data needs to be backed up and what needs to be done to restore it. Set up a nightly cron job somewhere (your laptop is fine enough if you don't have somewhere better) to shut down Vaultwarden, rsync its volume dirs, and start it up again. If your VPS explodes, copy these directories to a new VPS at the same DNS name and restart Vaultwarden using the same podman or docker-compose setup.
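A minimal sketch of that nightly job, assuming it runs on the docker host; the container name, volume path, and rsync destination below are all hypothetical placeholders you'd adapt:

```python
#!/usr/bin/env python3
# Nightly backup sketch: stop the container, rsync its volume dirs,
# start it again. All names/paths below are hypothetical placeholders.
import subprocess

CONTAINER = "vaultwarden"                   # hypothetical container name
VOLUME_DIRS = ["/srv/vaultwarden/data"]     # hypothetical volume path(s)
DEST = "backup-host:/backups/vaultwarden/"  # hypothetical rsync-over-ssh target

subprocess.run(["docker", "stop", CONTAINER], check=True)
try:
    for d in VOLUME_DIRS:
        # -a preserves ownership/permissions/timestamps; --delete mirrors removals
        subprocess.run(["rsync", "-a", "--delete", d, DEST], check=True)
finally:
    # Restart even if the copy failed, so the service comes back either way
    subprocess.run(["docker", "start", CONTAINER], check=True)
```

Run it from cron (e.g. `0 3 * * *`), and restore is the same rsync in reverse followed by a `docker start` with your existing compose setup.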
All that said, Keepass+filesync is a great solution as well. The reason I moved to Vaultwarden was so I could share passwords with others in a controlled way. For single-user use, I prefer how Keepass folders work and Keepass generally has better organization features... if it were just me, I'd still be using it.
My take echoes this. If one puts any stock in streamer recommendations: Baalorlord, who has at various times held Spire world-record winstreaks, has recently cited Monster Train as his current favorite spirelike (other than Spire itself), and also cited Griftlands as a playthrough highlight.
Baalor probably doesn't have an opinion on Inscryption, as he tends to avoid things with even a slight horror theme. I enjoyed what I played of Inscryption a lot, but very little about playing it evoked the vibe of playing Spire. Monster Train is quite adjacent though: the mechanics are different enough to feel fresh, but it slots into the same gameplay mood for me, whereas Inscryption is just a different (and still very good) thing.
Neither has the tight balance of Spire nor feels quite as deep strategically to me (though in all honesty I'm probably not a strong enough player to be trusted in this regard), but both are fun.
And just today with a comment by a world admin! Hopefully they'll get it sorted soon.
That's an interesting report, but it's possible to "work" at different latencies. Unless you have specialized audio capture/playback hardware and have done some tuning and testing to determine the lowest stable latency your system is capable of achieving, "works" for you is likely to mean something very different than it does to someone who does a lot of music production.
It remains an interesting question to some users whether Wayland changes the minimum stable latency relative to X, and if so, whether it does so for better or worse.
I'd consider asking in a Linux audio or music production community (though I'm not aware of any on Lemmy big enough to have a likely answer). If music production is a primary use case and audio latency matters to you, almost no general users are going to be able to comment on the difference between X and Wayland from a latency perspective. There may not be a difference, but there might be, and you're unlikely to learn about it outside an audio-focused discussion.
Yeah, snapshots sent to a separate and often remote pool is an extremely common backup strategy for folks who have long-term settled on ZFS. There's very nice tooling for this that presents a more traditional schedule/retention-based interface, saving you from scripting snapshots and sends yourself (a rough sketch of that raw scripting follows the list below).
- Sanoid is an old standby in that space.
- Zrepl is getting a lot of traction lately and seems to be an up-and-coming option.
- I use pyznap, but I don't recommend it to others as the maintainer is on a multi-year hiatus. It works great, but it isn't getting active development, which makes it a poor bet in a crowded space with many great options. I plan to eval zrepl when I get around to it.
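For a sense of what these tools handle for you, here's a minimal sketch of the raw snapshot-plus-retention scripting, assuming a hypothetical `tank/data` dataset and a 30-day window:

```python
#!/usr/bin/env python3
# Minimal sketch of the snapshot + retention scripting that tools like
# sanoid/zrepl replace. Dataset name and retention window are assumptions.
import subprocess
from datetime import datetime, timedelta

DATASET = "tank/data"   # hypothetical dataset
KEEP_DAYS = 30          # hypothetical retention window
PREFIX = "auto-"

# Take a dated snapshot, e.g. tank/data@auto-2024-01-31
today = datetime.now().strftime("%Y-%m-%d")
subprocess.run(["zfs", "snapshot", f"{DATASET}@{PREFIX}{today}"], check=True)

# Prune snapshots of this dataset that are older than the retention window
names = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-d", "1", "-o", "name", DATASET],
    check=True, capture_output=True, text=True,
).stdout.splitlines()
cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
for name in names:
    stamp = name.split("@", 1)[1]
    if not stamp.startswith(PREFIX):
        continue  # leave manual snapshots alone
    taken = datetime.strptime(stamp[len(PREFIX):], "%Y-%m-%d")
    if taken < cutoff:
        subprocess.run(["zfs", "destroy", name], check=True)
```

The real tools layer the hard parts on top of this: multiple retention tiers, send/receive orchestration, and failure handling.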
I don't know if what you're suggesting is possible. As I read it, you'd split your "live" raid-1 in half, then use one drive to rebuild the "live" pool and the other drive to rebuild the "backups" pool. It might work, but I can't think of any advantage to that approach and it's not something I would have thought to attempt.
I'd do one of:
- Ship the data over the network using ZFS send or something like syncoid/sanoid (which use ZFS send under the hood). It might be slow, but is that an issue? Waiting a week for the initial sync might be fine.
- But syncing by sneakernet is a good strategy too, and can be faster if your backup site is close or your connectivity is slow. In this case, I'd build the backup pool at the live site... ideally in an external drive bay, but one could plug it in internally as well. Then sync them with a local ZFS send, export the backup pool, detach and transport it to the backup site, then reattach and import it there. Et voilà: the backup pool is running at the remote site fully populated with data, and subsequent ZFS sends will be incremental. (The command sequence is sketched below the list.)
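A rough sketch of that sequence, with hypothetical pool names (`tank` live, `backuppool` backup); in practice you'd likely run these steps by hand rather than as a script:

```python
#!/usr/bin/env python3
# Sketch of the sneakernet migration. Pool/snapshot names are hypothetical.
import subprocess

def sh(cmd):
    # Helper so the zfs send | zfs receive pipeline reads naturally
    subprocess.run(cmd, shell=True, check=True)

# At the live site: snapshot everything, replicate locally, then export
sh("zfs snapshot -r tank@migrate")    # -r includes child datasets
sh("zfs send -R tank@migrate | zfs receive -F backuppool/tank")
sh("zpool export backuppool")         # now safe to detach and transport

# At the backup site, after reattaching the drives:
# sh("zpool import backuppool")
# Subsequent syncs can use incremental sends: zfs send -R -i <old> <new>
```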
Splitting and rebuilding your live pool might be possible, but I can imagine a lot of ways that could go wrong, and I can't see any reason to do it that way over export/import.
> It may seem kinda stupid to consider that an accomplishment, but I feel quite genuinely proud of myself for actually succeeding at this instead of just throwing in the towel...
Way to go. I've been at this a decent while and do some pretty esoteric stuff at work and at home... but this loop of feeling stupid, doing the work, and feeling good about a success has been a constant throughout. I spent a week struggling to port some advanced container setups to podman a month or so ago; same feeling of pride when I got them humming.
It's not stupid to be proud of an accomplishment even if it's a fundamental one that's early in a bigger learning curve. Soak it in, then on to the next high. Good luck.
> On October 12, Warhammer 40,000: Space Wolf and all of its DLC will be removed from the Steam store.
I don't know for sure in this case, but it's often because some license was time-limited. You might license the music for x years, or get a license to distribute some third-party software library with your game. When the license period runs out, the publisher either has to pay to renew it or stop selling the game. In either case, people who bought the game get to keep what they have.
I asked them elsewhere in the thread and Connect doesn't have crossposting either, fwiw. I have no idea why they're posting in this thread; their answer has nothing to do with your question.
I have both Connect and Jerboa installed; they're both fine. Connect looks prettier, and the search is definitely better. I end up using Jerboa more out of the two.
When I want to cross-post from mobile I end up switching over to Lemmy's mobile web interface, which can be saved to your home screen as a progressive web app. Not a Jerboa-native solution, but I've tried a lot of the Android apps and I haven't seen any of them support a proper cross-post.
Does connect have crossposting? I don't see it listed in the triple-dot menu for a post.
You connect to Headscale using the Tailscale clients, and configuration is exactly the same irrespective of which control server you use... with the exception of having to point the client at your custom server URL (on Linux that's just `tailscale up --login-server <your-headscale-url>`, but on mobile/Windows clients it requires navigating some hoops and poor docs).
But to my knowledge there are no client-side configs related to NAT traversal (which is kind of the goal... to work seamlessly everywhere). The configs on the Headscale server itself aren't so bad either, but the networking concepts involved are extremely advanced, so debugging anything that goes sideways, or validating that your server-side NAT traversal setup is working as expected, can be a deep dive. With Tailscale, you know any problems are client-side and can focus your attention accordingly... which simplifies initial debugging quite a lot.
> ... only if you are in the US and get an API key from NCMEC. They are very protective of who gets the keys and require a zoom call as well.
Do you have a source for these statements? They directly contradict the Cloudflare product announcement at https://blog.cloudflare.com/the-csam-scanning-tool/, which states:
> Beginning today, every Cloudflare customer can login to their dashboard and enable access to the CSAM Scanning Tool.
... and shows a screenshot of a config screen with no field for an API key. Some CSAM scanners do have fairly limited access, but Cloudflare's appears to be broadly available.
Yeah, misread the pricing page. Fixed the post, thanks for the correction.
I use Headscale, but Tailscale is a great service and what I generally recommend to strangers who want to approximate my setup. The tradeoffs are pretty straightforward:
- Tailscale is going to have better uptime than any single-machine Headscale setup, though not better uptime than the single-machine services I use it to access... so not a big deal to me either way.
- Tailscale doesn't require you to wrestle with certs or the networking setup required to do NAT traversal. And they do it well; you don't have to wonder whether you've screwed something up that's degrading NAT traversal only in certain conditions. It just works. That said, I've been through the wringer already on these topics, so Headscale is not painful for me.
- Headscale is self-hosted, for better and worse.
- In the default config (and in any reasonable user-friendly, non-professional config), Tailscale can inject a node into your network. They don't and won't. They can't sniff your traffic without adding a node to your tailnet. But they do have the technical capability to join a node to your tailnet without your consent... their policy not to do that protects you... but their technology doesn't. This isn't some surveillance power grab though; it's a risk that's essential to the service they provide... which is determining what nodes can join your tailnet. IMO, the Tailscale security architecture is strong, and I'd have no qualms about trusting them with my network.
- Beyond 3 ~~devices~~ users, Tailscale costs money... about $6 US in that geography. It's a pretty reasonable cost for the service, and proportional in the grand scheme of what most self-hosters spend on their setups annually. IMO, it's good value and I wouldn't feel bad paying it.
Tailscale is great, and there's no compelling reason that should prevent most self-hosters who want it from using it. I use Headscale because I can and I'm comfortable doing so... but they're both awesome options.
I replied to the parent comment here to say that governments HAVE set up CSAM detection services. I linked a review of them in my original comment.
- They've set them up through commercial partnerships with technology companies... but that's no accident. CSAM-fighting orgs don't have the tech reach of a major tech company, so they ask for help there.
- Those partnerships are limited to major/successful orgs... which makes it hard to participate as an OSS dev. But again, that's on purpose: the same access that would empower OSS devs to improve detection would enable CSAM producers to improve evasion. Secrecy is useful in this race, even if it has a high cost.
> Plus with the flurry of hugely privacy-invading or anti-encryption legislation that shows up every few months under the guise of "protecting the children online", it seems like that should be a top priority for them, right?! Right...?
This seems like inflammatory bait but I'll bite once.
- Improving CSAM detection is absolutely a top priority of these orgs, and in the last 10y the detection tools they've created with partners have expanded from scanning zero images to scanning hundreds of millions or billions of images annually. It's a fairly massive success story even if it's nowhere near perfect.
- Building global internet infrastructure to scan all/most images posted to the internet is itself hugely privacy-invading, even if it's for a good cause. Nothing prevents law-makers from coopting such infrastructure for less noble goals once it's been created. Lemmy is in desperate need of help here, and CSAM detection tools are necessary in some form, but they are also very much scary scary privacy-invading tools that are subject to "think of the children" abuse.
I'm not sure I follow the suggestion.
- NCMEC, the US-based organization tasked with fighting CSAM, has already partnered with a list of groups to develop CSAM detection tools. I've already linked to an overview of the resulting toolsets in my original comment.
- The datasets used to develop these tools are private, but that's not an oversight. The datasets are... well... full of CSAM. Distributing them openly and without restriction would be contrary to NCMEC's mission and to US law, so they limit the downside by partnering only with serious/capable organizations: ones able to commit significant resources to developing and long-term maintaining detection tools, and able to sign onerous legal paperwork promising to handle appropriately the access they must be given to otherwise-illegal material.
- CSAM detection tools are necessarily a cat and mouse game of CSAM producers attempting to evade detection vs detection experts trying to improve detection. In such a race, secrecy is a useful... if costly... tool. But as a result, NCMEC requires a certain amount of secrecy from their partners about how the detection tools work and who can run them in what circumstances. The goal of this secrecy is to prevent CSAM producers from developing test suites that allow them to repeatedly test image manipulation strategies that retain visual fidelity but thwart detection techniques.
All of which is to say...
> ... seems like law enforcement would have such a data set and seems they should of course allow tools to be trained on it. seems but who knows? might be worth finding out.)
Law enforcement DOES have datasets, and DOES allow tools to be trained on them... I've linked the resulting tools. They do NOT allow randos direct access to the data or tools, which is a necessary precaution to prevent attackers from winning the circumvention race. A Red Hat or Mozilla scale organization might be able to partner with NCMEC or another organization to become a detection tooling partner, but db0, sunaurus, or the Lemmy devs likely cannot without the support of a large technology org with a proven track record of delivering and maintaining successful/impactful technology products. This has the big downside of making a true open-source detection tool more or less impossible... but that's a well-understood tradeoff that CSAM-fighting orgs are not likely to change, as the same access that would empower OSS devs would empower CSAM producers. I'm not sure there's anything more to find out in this regard.
I haven't been moderated a lot, but I believe the user gets no indication they've been moderated unless the mod replies to them or DMs them to tell them.
I agree that auto-notification would be beneficial. Despite the easy availability of the modlog, this kind of question is pretty common. Not everyone knows it exists or how to search it.
The admins generally won't intervene until you've made a good faith attempt to coordinate directly with the mods and documented a clear case that they're unresponsive or malicious.