Posts 6 · Comments 27 · Joined 2 yr. ago

  • Federation between onion and standard domains, so that Tor users would not be isolated

    This is the hardest part, as you would need to both have an onion domain and a standard domain, or be a Tor-only federation.

    You can easily create a server and allow Tor users to use it; unless a Lemmy server actively blocks Tor, you'd be welcome to join via it. But federation from clearnet to onion cannot happen. It's the same reason email hasn't taken off in onionland. The only way email happens is when the providers actively re-map a clearnet domain to an onion domain.

    This is what Lemmy would need to do. But then you would have people who could sign up continuously over Tor and wreak havoc on the fediverse with nothing really stopping them. You would then have onion users creating content that would be federated out to other instances. And user-generated content from Tor users is ... not portrayed in the best light.
    I'm sure someone will eventually create an onion Lemmy instance, but it has its own problems to deal with.
    This is especially true given the lack of moderation tools, automated processes, and the spammers who are already getting through the cracks.

  • I can confirm the sections around downvotes, as Reddthat takes exactly the stance you're talking about (re your child comments).

    A downvote-disabled instance creates its own algorithm/feed/ranking based purely on all other metrics, because as far as the data is concerned, it sees every post as having 0 downvotes. It does not take into account external instances.
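A toy sketch of what that means (this is NOT Lemmy's actual ranking code; the decay formula and all numbers below are made up purely to show the shape): locally, every post reports 0 downvotes, so any score-based rank collapses to upvotes and age alone.

```shell
# Toy ranking sketch -- NOT Lemmy's real algorithm. On a downvote-disabled
# instance the local database reports 0 downvotes for every post, so
# score == upvotes and the feed is ordered by upvotes and age alone.
awk 'BEGIN {
  up = 50; down = 0; age_hours = 4        # downvotes are always 0 locally
  score = up - down                       # collapses to just "up"
  rank  = log(score + 1) / (age_hours + 2) ^ 1.5   # illustrative decay only
  printf "score=%d rank=%.3f\n", score, rank
}'
```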

  • I can answer the first point.
    We've already tackled part of that problem with the Parallel Sending feature, which can be enabled on instances with a tremendous amount of traffic. Currently the only instance where it makes sense to enable it is LemmyWorld, and the only reason is so that servers which are geographically far away can get more than 3-4 activities/second.

    With that feature, servers that eventually house and generate the biggest amounts of traffic will be able to successfully communicate all of those activities to everyone else who needs them.

    I predict a 10x increase is well within the grasp of our current systems. 1000x? That's a different story, to which I don't have the answers.
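A back-of-the-envelope model of why parallel sending helps (the `rtt` and `workers` values below are assumptions for illustration, not measurements from any real instance): if each activity is one sequential HTTP round trip, throughput is capped by latency, and parallel senders multiply that cap.

```shell
# Illustrative throughput model -- rtt and workers are assumed values.
rtt=0.3       # assumed EU <-> AU round-trip time, in seconds
workers=8     # assumed number of parallel senders

awk -v rtt="$rtt" -v w="$workers" 'BEGIN {
  printf "sequential: %.1f activities/s\n", 1 / rtt
  printf "parallel:   %.1f activities/s\n", w / rtt
}'
```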

  • 😁👍 Happy to be a sacrifice for the greater good

  • I can't wait! We'll finally get to do a real-world test of the parallel sending feature!! And if all goes well, I'll get to save 5 Euro a month!

    Thanks LW

  • Thank you for the update, and it's good to hear your upcoming plans. Being one of those people in Australia (Reddthat), it will be good to see if it actually works as it's designed to!
    I'd love to save $7/m by not needing a server dedicated to batching the federation traffic 😅

    When you lay out the timelines, from 0.19.3 onwards feels like no time at all, and having to deal with the issues after .3 has certainly not been fun as an admin. (And I'm only a small server by comparison!)
    You're such a huge player in our Lemmyverse, so thanks for taking the time to plan this out, as I know how much testing has been done to get us this far.

    It's always a nice experience chatting to the LW team!
    Hope your updates go smoothly!

  • I too would love to know what you're experiencing (so I can fix it!)

  • This is SSO support on the client side. So you could use any backend that supports OAuth (I assume, didn't look at it yet).

    So you could use a Forgejo instance, immediately making your Git hosting instance a social platform, if you wanted.
    Or use something self-hostable, like hydra.

    Or you can use the social platforms that already exist, such as Google or Microsoft, allowing faster onboarding to the fediverse while passing the issues that come with user creation onto a bigger player who already does verification. All of these features are up to your instance to decide on.
    The best part: if you don't agree with what your instance decides on, you can migrate to one whose policy coincides with your values.

    Hope that gives you an idea behind why this feature is warranted.

  • Australia @aussie.zone

    'Big, massive deterrent': Social media companies could face fines for allowing kids under 14 on their platforms

  • The script will be useless to you, except as a reference for what to do.

    Export, remove pg15, install pg16, import. I think you can streamline it with both installed at once, as they version correctly. You could also use the in-place upgrade, aptly named: pg_upgradecluster

    But to update to 0.19.4, you do not need to go to pg16... but... you should, because of the benefits!
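The steps above, sketched for a Debian-style install (the cluster name, user, and paths are assumptions -- verify everything against your own setup before running it):

```shell
# Sketch of the export/import upgrade path -- assumed names throughout.
pg_dumpall -U postgres > /tmp/all.sql     # export everything from pg15
# ...stop the pg15 cluster, install postgresql-16, start the new cluster...
psql -U postgres -f /tmp/all.sql          # import into pg16

# Or the in-place route on Debian/Ubuntu, which migrates cluster 15/main
# to the newest installed version:
# pg_upgradecluster 15 main
```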

  • Congratulations! 👏 Happy B-day and here's to many more to my across the river friends 🎉

    Ps. The video works and is great!

  • That awkward moment when you are the person they are talking about when running beta in production!

  • Since the 11th @ 9am UTC, LW has seen a two-fold increase in activities. If my insider knowledge (and math) is right, it's 7 req/s on average, up from 3 req/s.

    Lucky for both of us we are not subbed to every community on LW but I think we are subbed just enough to be affected.

  • Relevant: https://reddthat.com/comment/8316861 tl;dr: the current centralisation results in a lemmy-verse theoretical maximum of 1 activity per 0.3 seconds, or 200 activities per minute, as the total round trip between EU -> AU and back is just under 0.3 seconds.

    Edit: can't math when sleepy
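The arithmetic behind that ceiling, assuming one activity per sequential round trip:

```shell
# 0.3 s round trip per activity => sequential sending ceiling
awk 'BEGIN {
  rtt = 0.3                               # seconds per activity
  printf "%.1f activities/s\n", 1 / rtt
  printf "%.0f activities/min\n", 60 / rtt
}'
```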

  • We rebuilt the Lemmy container with an extra logging patch. Seems the build docs need some work, as that's the only difference in the past 1-2 days, except for moving to postgres 16...

    Thanks for the ping.

    I've gone back to mainline Lemmy. @Morpheus@lemmy.today check now please

  • See my PR for a new backup script. https://github.com/LemmyNet/lemmy-ansible/pull/210

    I'll get to adding it to the main docs on the weekend.

    tl;dr: piping your backups via docker is CPU-expensive. Writing directly to the filesystem in a postgres-compatible format with compression is faster and more CPU-efficient.

    My 90GB+ (on filesystem) db compresses to 6GB and takes less than 15 mins.
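To make that concrete, a sketch of the two approaches (the container name, database name, user, and paths are all assumptions, and this is not the exact script from the PR):

```shell
# Assumed names throughout -- adjust for your own deployment.

# CPU-heavy path: the whole dump is streamed through docker and
# re-compressed by a separate gzip process:
# docker exec postgres pg_dumpall -U lemmy | gzip > backup.sql.gz

# Cheaper path: pg_dump writes a compressed, pg_restore-compatible
# custom-format archive straight to a bind-mounted directory:
docker exec postgres \
  pg_dump -U lemmy -d lemmy -Fc -Z 6 -f /backups/lemmy.dump

# Restore later with:
# docker exec postgres pg_restore -U lemmy -d lemmy /backups/lemmy.dump
```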

  • Memes @lemmy.ml

    Saturday Morning Breakfast Cereal - Never had

    Memes @lemmy.ml

    Saturday Morning Breakfast Cereal - Rite

    Memes @lemmy.ml

    TITLE

    Sysadmin @lemmy.ml

    Happy 50th birthday, Ethernet | APNIC Blog

    Sysadmin @lemmy.ml

    GitHub - LemmyNet/lemmy-ansible: A docker deploy for ansible