Posts 3 · Comments 113 · Joined 2 yr. ago

    1. Yes, most trackers have something on their website to let you know what your ratio is, what you're downloading, and how long you've been seeding those files.
    2. With the trackers I'm familiar with, yes: seeding for 9d 23h 59m 59s is the same as seeding for 0s. You'll still get tagged with a HnR (Hit and Run).
    3. You can shut down as much as you like. But, again, the trackers I'm familiar with cap the number of HnRs you can have on your account, so you might have action taken against you if you're seeding 5 different torrents and decide to shut down.
    4. Don't know.
    5. The rest don't appear to be questions, so I'm not sure how to respond.
  • Cloudflare? Namecheap?

    Not sure exactly what features you're after, but the vast majority of them support what you mentioned above.

  • Btw, I appreciate the fediverse and decentralization as much as the next guy; heck, I'm even writing software for the fediverse. But I feel like there are a handful of people out there who want to apply the fediverse concept to everything, similar to what happened with blockchain: everyone and everything had to be implemented via blockchain, even when it didn't make sense in the end.

    IMO though, GitHub is just one "instance" in an already decentralized system. Sure, it may be the largest, but it's already incredibly simple for me to move and host my code anywhere else. GitHub's instance just happens to provide the best set of tools and features available to me.

    But back to my original concerns. Let's assume you have an ActivityPub-based git hosting system. For the sake of argument, let's assume that there are two instances in this federation today. Let's just call them Hub and Lab....

    Say I create an account on Hub and upload my repository there. I then clone it and start working... It gets federated to Lab... But the admin on Lab just decides to push a commit to it directly because reasons... Hub can now do a few things:

    1. They could just de-federate, but who knows what will happen to that repo now.
    2. Hub could reject the commit, but now we're in a similar boat: effectively, the repo has been forked and you can't really reconcile the histories between the two. Anyone on Lab can't use that repo anymore.
    3. Accept the change. But now I'm stuck with a repo containing unauthorized edits.

    Similarly, if Hub were to go down for whatever reason. Let's assume we have a system in place that effectively prevents the above scenario from happening... If I didn't create an account on Lab before Hub went down, I no longer have the authorization to make changes to that repository. I'm now forced to fork my own repository and continue my work from the fork. But all of my users may still be looking for updates to the original repository; telling everyone about the new location becomes a headache.

    There's also the issue of how you handle private repositories. This is something the fediverse can't solve, so all repos in the fediverse would HAVE to be public.

    And yes, if GitHub went down today I'd have similar issues, but that's why you have backups, and git already has a solution for that outside the fediverse. Long story short: the problems that the fediverse solves don't exist for git, and it raises additional problems that now have to be solved. Trying to apply the fediverse to git is "a solution in search of a problem", IMHO.

  • I don't get what benefit hosting your own git brings to be honest

    Just another level of backup. Personally I tend to have:

    1. A copy of my repo on my dev machine
    2. A copy on a self-hosted git server (currently I'm using GitBucket).
    3. A copy on GitHub.

    This way I should always have 2 copies of my code accessible at all times, so there's only a very slim chance that I'll lose my code, even temporarily.
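As a hedged sketch of that setup (the repo paths and remote names are illustrative, and two local bare repos stand in for GitHub and a self-hosted server):

```shell
# Stand-ins for GitHub and a self-hosted server (illustrative local paths).
git init -q --bare /tmp/github-copy.git
git init -q --bare /tmp/selfhosted-copy.git

# Copy #1: the working repo on the dev machine.
git init -q -b main /tmp/project
cd /tmp/project
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit"

# Copies #2 and #3: register both servers as remotes and push to each.
git remote add github /tmp/github-copy.git
git remote add selfhosted /tmp/selfhosted-copy.git
git push -q github main
git push -q selfhosted main
```

After every push, each server holds a full copy of the history, so any one of the three can rebuild the other two.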

  • The target folder may be quite large; you can look at the dependencies for my project, but my end binary is only a few MB.

  • IMHO federation doesn't bring any real benefits to git and introduces a lot of risks.

    The git protocol, if you will, already allows developers to back up and move their repositories as needed. And the primary concern with source control is having a stable and secure place to host it. GitHub already provides that, free of charge.

    Once you introduce federation, how do you control who can and cannot make changes to your codebase? How do you ensure you maintain access if a server goes down?

    So while it's nice that you can self host and federate git with GitLab, what value does that provide over the status quo? And how do those benefits outweigh the risks outlined above?
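That portability is built into git itself: a mirror clone carries every branch and tag and can be replayed onto any other host. A hedged sketch, using local bare repos as stand-ins for two hosting providers (paths are illustrative):

```shell
# Two stand-in hosts (illustrative local paths).
git init -q --bare /tmp/old-host.git
git init -q --bare /tmp/new-host.git

# Put some history on the old host.
git clone -q /tmp/old-host.git /tmp/work
git -C /tmp/work -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "work"
git -C /tmp/work push -q origin HEAD:main

# Moving hosts: mirror every ref off the old one, replay onto the new one.
git clone -q --mirror /tmp/old-host.git /tmp/mirror.git
git -C /tmp/mirror.git push -q --mirror /tmp/new-host.git
```

The new host ends up with an identical set of refs, no federation protocol required.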

  • Anyway I think we can build a search engine that can respect their wishes and keep them out of the index.

    I'd love to hear your ideas on this. At least initially I don't see a way to build a crawler that can ignore select individuals. But I haven't really dug into the Mastodon APIs just yet.

    1. Would you be able to share the results of the survey here once it's complete?

    I don't run my own instance, at least not publicly, but I'm very curious about the answers to some of those questions, and I don't want to skew your results by voting.

    2. Primarily, I'm curious what feedback you get on search.

    I'm already working on a search engine that's public for Lemmy, and I hope to add Mastodon in the future. So far, many Mastodon users seem to be very anti-search, so I'm curious about your results.

  • Let me introduce you to https://sense.com/ and help you create a new obsession.

    P.S. It's not perfect, as it uses machine learning to determine your appliances and it can't identify electronics like your computer or TV, but it'll help you find what might be chipping away at your power bill.

  • Ya, I've got a few public services out there and I would love a better way to manage them, but the fewer ports I open the better. I think there's also the Portainer Edge Agent, which is more secure for prod environments, but I've yet to look into it much.

  • Slightly off topic, but are there not security concerns about opening up a Portainer instance to the internet? I run Portainer for all of my intranet-hosted containers, but I have reservations about running either the agent or Portainer itself on something external to my LAN. It seems like an easy attack vector, but maybe I'm just overly worried?

  • Let's say I just sent a request from my non-existent server with my user id...

    Who or what is going to send this request, if not some server that implements ActivityPub? This could be a Lemmy, Mastodon, or Kbin instance... or anything else that implements ActivityPub.

    ...and just every time I wanted to check whether I got replies I would query the other server (which a Lemmy server would do to get notifications about replies or upvotes)

    ActivityPub works via pushes, so there's nothing to query. There HAS to be some server to receive and store that data.
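For reference, a "push" here means the sending server POSTs an activity to the recipient server's inbox. A minimal sketch of such a payload (the actor/object URLs are made up; the real shape is defined by the ActivityPub/ActivityStreams specs):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://hub.example/activities/123",
  "type": "Create",
  "actor": "https://hub.example/users/alice",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "object": {
    "id": "https://hub.example/notes/456",
    "type": "Note",
    "content": "Hello, fediverse!"
  }
}
```

This gets POSTed to something like `https://lab.example/inbox`; if no server is listening there, the delivery simply fails and the reply is lost.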

  • So you can't just send data from a domain. There has to be a service running behind that domain name to do something.

    Without a server, it'd be like asking "why do I need tires on my car?". Well it's not going to go anywhere without them.

    Now, this could be a private instance with only you as the single user, and it could federate with the rest of the fediverse. But you still have to run some software to do that.

    Now, in theory, I guess someone could come up with a slim version of Lemmy that has only a single user and doesn't let anyone post or comment directly to that instance, but again, something has to be running on a server behind that domain.

  • There are APIs that you can use to post and comment, etc... But there has to be an instance at that domain.

    That's how most of the mobile apps work, btw: they just send a network request directly to lemmy.world etc., saying that iopq wants to create a post with this title and this content....

    But there's no need for your own personal domain name in this scenario, you just need an account on the server that you're trying to post or comment to.

    Not sure if that helps.
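To illustrate, the "create post" call in Lemmy's HTTP API is a JSON request POSTed to `https://lemmy.world/api/v3/post` with a body roughly like the following (a hedged sketch: the field names are from the v3 API, the JWT and community ID are placeholders, and newer Lemmy versions pass the JWT in an `Authorization` header rather than the body):

```json
{
  "name": "Post title",
  "body": "Post content",
  "community_id": 1234,
  "auth": "eyJ...your-login-jwt"
}
```

The JWT comes from logging in to that instance first, which is why you need an account there but not your own domain.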

  • So here's my current setup (each one is a separate docker container):

    Download machine: (has lots of RAM and HDD space)

    • Nginx (for reverse proxy)
    • Sonarr (TV)
    • Radarr (movies)
    • Prowlarr (organizing download sources)
    • qBittorrent (make sure to bind it to the WireGuard interface)
    • WireGuard (for the qBittorrent VPN)
    • NZBGet (Usenet)
    • SABnzbd (also Usenet; some providers work better with SABnzbd for whatever reason)
    • Portainer agent (for remote docker management)
    • Watchtower (for automatic updates)

    TV machine: (can transcode)

    • Nginx
    • Jellyfin (to transcode and actually watch the content)
    • Portainer agent (for remote docker management)
    • Watchtower (automatic updates)

    I'm not aware of a single container that has all of this bundled together though.
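A minimal compose sketch of the qBittorrent/WireGuard binding from the download box above (a hedged sketch, not a full config: the image names are the common linuxserver.io ones, and the volume paths and VPN config are placeholders):

```yaml
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add: [NET_ADMIN]
    volumes:
      - ./wireguard:/config            # your VPN provider's wg0.conf goes here

  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: "service:wireguard"  # all torrent traffic exits via the VPN
    depends_on: [wireguard]

  sonarr:
    image: linuxserver/sonarr
    ports: ["8989:8989"]               # reachable directly, no VPN needed
```

The `network_mode: "service:wireguard"` line is what guarantees qBittorrent can't leak traffic outside the tunnel: if the VPN container is down, the torrent client has no network at all.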

  • Can you elaborate more on what exactly you're looking for?

    There are Docker containers for NZBGet and SABnzbd, two Usenet download clients... but I'm not sure if that's what you're after.