100% compatible way to get 2FA on Jellyfin (Tutorial)
PriorProject @lemmy.world · 9 posts · 266 comments · joined 2 yr. ago
Fair enough, sounds like you have a well-considered use case for Kuma specifically. Good luck; I don't have much to offer on your OP question.
I'm mostly in the pro-written-word camp myself, but I have sought out video tutorials in cases where written docs seem to assume something I don't know. When I'm learning something new, a written doc might have a 3-word throwaway clause like "... add a user and then...". But I've never added a user and don't know how. If it's niche open-source software with a small dev team, this may not be covered in the docs either. I'll go fishing for videos, and just seeing whether they go to a web UI or a config file or whatever sets me on the path to figure out the rest myself.
That is to say, video content that shows someone doing a thing successfully often includes unspoken visual information that the author doesn't necessarily value or even realize is being communicated. But the need to do the thing successfully on-screen involves documenting many small/easy factoids that can easily trip someone inexperienced up for hours.
I'm as annoyed as anyone when I want reference material and find only videos, and I generally prefer written tutorials as well. But sometimes a video tutorial is the thing that gets me oriented enough to understand the written material I wasn't ready to process previously.
Edit: The ubiquity of video material probably has little to do with its usefulness though, and everything to do with how easy it is to monetize on YouTube.
This isn't exactly an answer to your question, but an alternative monitoring architecture that sidesteps this problem entirely is to run netdata on each server you run.
- It appears to collect WAY more useful data than Uptime Kuma, and requires basically no config. It also collects data on docker containers running on the server, so you automatically get per-service metrics as well.
- Health probes for several protocols including ping and http can be custom-defined in config-files if you want that.
- There's no cross-server config or discovery required; it just collects data from the system it's running on (though health probes can hit remote systems if you wish).
- If any individual or collection of services is down, I see it immediately in their metrics.
- If the server itself is down, it's obvious and I don't need a monitoring system to show a red streak for me to know. I've never wasted more than a minute differentiating between a broken service and a broken server.
This approach needs no external monitoring hosts. It's not as elegant as a remote monitoring host that shows everything from a third-party perspective, but it also has the benefit of never false-positiving because the monitoring host went down or lost its network path to the monitored host... Netdata can always see what's happening because it's right there when it happens.
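For anyone who wants to try this, here's a rough sketch of what "netdata on each server" can look like. The docker flags are a trimmed-down version of netdata's documented container setup, and the service name and URL in the probe config are purely hypothetical placeholders:

```bash
# Run the netdata agent on the host (a package install works equally well).
# Port 19999 serves the local dashboard; the docker socket mount lets netdata
# resolve container names so per-container metrics are labeled usefully.
docker run -d --name netdata \
  -p 19999:19999 \
  --cap-add SYS_PTRACE \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  netdata/netdata

# On a package install, a custom HTTP health probe is just a small config file
# (service name and URL below are placeholders):
cat > /etc/netdata/go.d/httpcheck.conf <<'EOF'
jobs:
  - name: my_service
    url: http://127.0.0.1:8096/health
EOF
```

The dashboard then lives at http://your-host:19999 on each box.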
Those of you who are married, how do you go about privacy if your wife or husband does not care?
I wouldn't say that my partner "doesn't care", but they take a much more pragmatic view than I do, which results in more exposure. In general, we do the following:
- To a first approximation, they decide what apps and services they use. It's not a monarchy. They'll ask for feedback when comparison shopping, but often the answer is "every dominant ecosystem in this space is terrible, the privacy respecting options don't meet your requirements, this option is 5% worse and this one is 5% better... glhf".
- For social media accounts that share posts about our nuclear family, we come to broad consensus on the privacy settings and practices. There's give and take here, but I make space to use dominant sharing apps and they make space to limit our collective exposure within reason. If I have a desire to "harden" the privacy settings on a service, it's on me to put in the effort to craft the proposed settings changes and get their buy in on the implications.
- I have many fewer privacy-raiding accounts than they do. I both benefit from transitive access to the junk they sign up for, and pay a cost in my own privacy by association. This just is what it is. The pool of partners who align perfectly with my own views is basically zero though, and honestly I probably wouldn't put up with my shit even if I could find one.
- If I can self-host a competitive option for a use-case that I'm happier with... they'll give it the old college try. But it has to actually be competitive or they'll fail out of the system and fall back to whatever works for them. If we can figure out what's not working we'll sometimes iterate together, but sometimes it's just not good enough and we go back to something I like worse.
It's basically like navigating any other conflict in values. You each have to articulate what your goals are, and make meaningful compromise on how to achieve something that preserves the essentials on both sides. As a privacy outlier, sometimes one also needs to be able to hear "I want to do normal shit and not feel bad about it" and accept it. But if we do want to reach for outlier privacy practices in some specific area, it's on us to break that desire down into actionable steps in realistic directions at a sustainable pace and to not ignore the impacts to our partners of the various tradeoffs we're proposing. Privacy is often uncomfortable and we need to acknowledge the totality of what we're asking for when we ask our partners to accommodate our goals there.
The headline of the article is just "The History of the Modern Graphics Processor", though. OP is having a fever dream with that post title; it has nothing to do with the article title or with the article.
Did the government invent OP to make us question Betteridge's law of headlines? Has the law of headlines become too dangerous to ignore?
- If a service supports sqlite, I often will use that option. It provides everything a self-hoster needs from a DB with basically no operational overhead.
- If I do need a proper RDBMS (because the software I'm using doesn't support sqlite), I'm going to use...
- A single Postgres container.
- Configured with multiple logical "databases" (the unit that holds schemas and tables), one database for each app connecting.
I do this because I'm always memory-constrained and the RDBMS is generally the most memory-hungry part of any software stack. By sharing one DB process across all the apps that need it, I get the most out of my DB cache memory, etc. And by using multiple logical DBs, I get good separation between my apps, and they're straightforward to migrate to a truly isolated physical DB if needed... but that's never been needed.
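For concreteness, a minimal sketch of the "one Postgres container, multiple logical databases" pattern might look like this; the app name and passwords are placeholders, not anything from a real setup:

```bash
# One shared Postgres container backed by a named volume.
docker volume create pgdata
docker run -d --name shared-postgres \
  -e POSTGRES_PASSWORD=change-me \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16

# Give the container a few seconds to initialize, then create one role and one
# logical database per app so credentials and data stay separated.
docker exec shared-postgres psql -U postgres \
  -c "CREATE ROLE nextcloud LOGIN PASSWORD 'also-change-me';" \
  -c "CREATE DATABASE nextcloud OWNER nextcloud;"

# The app then connects to the shared container with its own credentials:
#   postgres://nextcloud:also-change-me@shared-postgres:5432/nextcloud
```

Migrating one app to its own dedicated Postgres later is then just a pg_dump/pg_restore of that single logical database.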
... advertisement and push they did on sites like reddit...
The lemmy world admins advertised on Reddit? Can you link an example?
... their listing on join-lemmy.org...
Until recently EVERY lemmy instance was listed on join-lemmy.
And with the name Lemmy.world they did nothing to dissuade anyone from thinking that.
They run a family of servers under the .world TLD, including at least Mastodon, Lemmy, and Calckey instances. They're all named similarly.
I also saw nothing from .world not claiming to be the bigger instance (super lemmy)
They ARE the biggest instance, but that happened organically. It's not based on any marketing claims from the admin team about being a flagship/super/mega/whatever instance. People just joined, and the admins didn't stop them (nor should they). It's not a conspiracy to take over lemmy. It's just an instance that... until recently... happened to work pretty well when some were struggling.
I think the issue is that .world has put itself forward as some sort of super lemmy.
Citation needed. All the admins of lemmy world ever purported to do was host a well-run general-purpose (aka not topic-oriented) lemmy instance. It was and remains that, and part of being a well-run general purpose instance is managing legal risk when a small subset of the community generates an outsized portion of it.
Being well run meant that they scaled up and remained operational during the first reddit migration wave. People appreciated that, but continuing to function does not amount to a declaration of being a super lemmy.
World has also kept signups open through good times and, more recently, bad. Other instances at various times shut down signups or put irritating steps and purity tests along the way. Keeping signups open is a pretty bare-minimum bar for running a service though; it is again not a declaration of being a super-lemmy.
Essentially, lemmy world just... kept working (until recently, when it has done a pretty poor job of that). I dunno where you found a declaration that lemmy world is a super-lemmy, but it's not coming from the lemmy world admins; it's likely randos spouting off.
Are you aware of the modlog? Post removals generally do have an explanation. There's a handy one in your case:
Discussion on how to pirate games is not allowed. We'll update the rules to be more transparent about this.
https://sopuli.xyz/modlog?page=1&actionType=ModRemovePost&userId=1683334
Why would you use LVM to configure the RAID-1 devices? Btrfs supports raid1 natively.
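Untested sketch, but native btrfs RAID-1 is roughly a one-liner either way (device names and mountpoint are placeholders):

```bash
# New filesystem with RAID-1 for both data and metadata:
mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY

# Or convert an existing single-device filesystem after adding a second disk:
btrfs device add /dev/sdY /mnt/pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```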
I also confirmed this as a long-standing bug.
https://blog.mastodon.world/ posts monthly-ish finance updates. I've never heard about formalizing as a non-profit, and their choice to do so or not is not something I'm concerned about, given their track record with mastodon.world and their voluntary transparency.
Two tips:
I have not tried running WINE yet but I plan on doing so soon.
Steam "just works" on Linux, you can install it via flatpak (which I use) or from their deb repo. It includes "Proton", which is a fancy bundle of wine and some extra open source valve sauce to make it nice and easy to use. Any game that runs on the steam deck also runs on Linux via proton, and there's no messing around at all. It looks and feels just like steam on Windows, and thousands of games just work with no setup or config beyond clicking the big blue and green buttons to install and run. Not EVERY games works, but tons do. I'd heavily recommend this over raw wine to a beginner.
The second tip is not to ask what you can do on Linux. The answer, to a first approximation, is that you can do everything on Linux that you can do on Windows or OSX. I daily drive all three, and mostly do the same stuff on them. Instead, ask YOURSELF what you WANT to do on Linux. Then Google and ask us HOW to do it... or what the nearest approximation is if the precise thing you want to do doesn't work on Linux.
I use postgres for my install and had a similar thing happen to me. I tried moving an org credential to a folder, which moved the folder to the org, and kicked all other credentials to "no folder".
Thanks for confirming with your DB. That saves me sweating whether I should rebuild on PG at least, and also makes me feel better that it's a folder bug and not generalized database corruption.
Having finished the heavy organizing, my rate of big org transfers has slowed and I haven't reproduced again yet. Hopefully this will be uncommon enough to be a non-issue. Thanks again for the info.
Thanks for the suggestion, but sync seems to be working ok... at least on the read side. I was able to verify the pre-existing good state and the bad state afterward from multiple clients. If sync played into it, it must have been on a write somehow.
A very common DDoS attack uses UDP services to amplify your request to a bigger response, but then spoof your src ip to the target.
Having followed many reports of denial-of-service activity against Lemmy, I don't think this is the common mode. Attacks I'd heard of involve:
- Using regular lemmy APIs backed by heavy database queries. I haven't heard discussion of query rates, but Lemmy instances are typically single-machine deployments on modest 4-core to 32-core hardware. Dozens to thousands of queries per second to the heaviest API endpoints are sufficient to saturate them. There's no need for distributed attack networks to be involved.
- Uploading garbage images to fill storage.
Essentially the low-hanging fruit is low enough that distributed attacks, amplification, and attacks on bandwidth or the networking stack itself are just unnecessary. A WAF is still a good idea if indeed OP's instance is getting attacked, but I'd be surprised if WAFs have built-in rules for lemmy yet. I somewhat suspect one would have to do the DB query analysis to identify slow queries and then write custom WAF rules to rate-limit the corresponding API calls. But it's worth noting that OP has provided no evidence of an attack. It's at least equally likely that they DoS'ed themselves by running too many services on a crappy VPS and running out of RAM. The place to start is probably basic capacity analysis.
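If anyone wants to do that basic capacity analysis, a rough starting point looks something like the sketch below; the container name and DB superuser are placeholders, not details from OP's setup:

```bash
# Is the box just out of memory? Check swap pressure and look for OOM kills:
free -h
dmesg -T | grep -iE 'out of memory|oom'

# Which containers are eating RAM and CPU right now?
docker stats --no-stream

# Have Postgres log anything slower than 500 ms so the heavy API endpoints
# (the ones worth rate-limiting at a WAF) show up in the logs:
docker exec lemmy-db psql -U postgres \
  -c "ALTER SYSTEM SET log_min_duration_statement = '500ms';" \
  -c "SELECT pg_reload_conf();"
```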
Some recent sources:
There's https://lemmy.world/legal for a variety of instance policy things, but it doesn't cover privacy and I don't believe there is an official statement on that.
Having one would be nice, but my sense from how the admins handle transparency in general is that the privacy practices here are best-in-class compared to commercial social media giants. That's speculation of course, but semi-informed by watching how the admins have handled a wide variety of issues.
- Subscribe to some communities you DO like and use your subscribed feed more often. It's easier to subscribe to what you want than to block everything you don't.
- There's some app that does have this feature. I don't remember which, but you should be able to search it up. The normal way to do this is blocking at the server, and that's a frequently requested feature that hasn't been built yet... but apps can fetch the posts and just not display them... client-side blocking. A non-zero number of apps have this feature to block an instance.
- Some day this will likely get built into lemmy itself and all clients will get it.
Multi-reddits are a frequently requested feature and there's a GitHub issue for them: https://github.com/LemmyNet/lemmy/issues/818
Core devs for the most part are focused on existential issues like performance (slow DB queries are responsible for the denial-of-service attacks that are taking lemmy world down daily) and moderation tools (the lack of which is responsible for major instances defederating from each other rather than moderating more aggressively). Unless a community dev steps up to work on multi-reddits, it's likely to sit at the back of the line for a few months.
It's also possible that some app will jump the line with client-side-only multireddits. If that's a thing, I haven't heard about it yet. Maybe someone else will chime in if they have.
This is a great approach, but I find myself not trusting Jellyfin's preauth security posture. I'm just too concerned about a remote unauthenticated exploit that 2FA does nothing to prevent.
As a result, I'm much happier having Jellyfin access gated behind Tailscale or something similar, at which point brute-force attacks against Jellyfin directly become impossible in normal operation and I don't sweat 2FA much anymore. This is also 100% client-compatible, since Tailscale is transparent to the client, and it also protects against brute force against Jellyfin because direct network communication with Jellyfin isn't possible. And of course, Tailscale has a very tightly controlled preauth attack surface... essentially none if you use the free/commercial Tailscale, and even self-hosting Headscale, I'm much more inclined to trust their code as security-conscious than Jellyfin's.
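A minimal sketch of that setup, assuming docker for Jellyfin and the stock Tailscale client (the tailnet IP and host paths below are placeholders):

```bash
# On the server: install tailscale and join your tailnet.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Find the server's tailnet address (a 100.x.y.z IP):
tailscale ip -4

# Bind Jellyfin only to that address so it's unreachable from the open internet
# (100.101.102.103 stands in for the IP printed above):
docker run -d --name jellyfin \
  -p 100.101.102.103:8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  jellyfin/jellyfin
```

Clients on the same tailnet then point at http://100.101.102.103:8096 exactly as they would at any other Jellyfin server, which is why it stays 100% client-compatible.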