
Posts: 80 · Comments: 825 · Joined: 2 yr. ago

  • Could be, although Codeberg goes down more than anything else I use, so this could be normal.

  • You have the privilege of not giving a shit about racist attacks.

  • This ^ ^ ^ is what privilege looks like.

  • Anytime there is an open-source "community edition" and a closed-source "enterprise edition", it's pretty suspect. There will always be a temptation to cripple the community edition a bit, to drive sales of the paid version.

  • No, sorry, you can't subscribe to Lemmy communities on PieFed; that's not how it works. You have to subscribe to them on the Lemmy instance where they are hosted.

  • Yep, it federates with Lemmy well. But PieFed doesn't have many communities yet - see https://piefed.social/communities/local

    I mostly use it for posting to communities on Lemmy instances. For now.

  • During onboarding of a new account, https://piefed.social/ asks you this:

    ... and sets up an appropriate keyword filter based on your answer.

  • For fascists, regular displays of hypocrisy are important, because that's the guarantee that the bad things they obviously plan to do won't be done to their supporters.

  • The devs of every fediverse software run the biggest instances of that type, and those instances don't have this problem.

  • Thing is, search bars are for typing in keywords, not URLs.

    Certain other federated reddit clones just have an 'add remote community' button on the communities list.

  • You're asking a lot from a LinkedIn post.

  • OK, if you want to focus on that single phrase and ignore the whole rest of the page, which documents decades of search-engine history without a single mention of API endpoints, that's fine. You can have the win on this; here's a gold star.

  • It's been a consensus for decades

    Let's see about that.

    Wikipedia lists http://www.robotstxt.org/ as the official homepage of robots.txt and the "Robots Exclusion Protocol". In the FAQ at http://www.robotstxt.org/faq.html, the first entry is "What is a WWW robot?" (http://www.robotstxt.org/faq/what.html). It says:

    A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.

    That's not FediDB. That's not even nodeinfo.

  • Maybe the definition of the term "crawler" has changed, but crawling used to mean downloading a web page, parsing the links, and then downloading all those links, parsing those pages, and so on until the whole site has been downloaded. If that corpus contained links going to other sites, the same process repeated for those. Obviously this could cause heavy load, hence robots.txt.

    FediDB isn't doing anything like that, so I'm a bit bemused by this whole thing.
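To make the contrast concrete, the recursive crawling described above can be sketched roughly like this (a minimal same-site breadth-first walk using only the Python standard library; illustrative only, not anyone's actual crawler):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collect the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=50):
    """Download a page, parse its links, then download those links,
    repeating until the whole site (up to max_pages) has been visited.
    This is the part that can hammer a server, hence robots.txt."""
    site = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = urlopen(url).read().decode("utf-8", errors="replace")
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if urlparse(absolute).netloc == site:  # stay on the same site
                queue.append(absolute)
    return seen
```

The defining feature is the loop: every fetched page feeds more URLs back into the queue. A stats collector that hits one fixed endpoint per instance has no such loop.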

  • lol FediDB isn't a crawler, though. It makes API calls.
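For comparison, the kind of call a stats site makes looks more like this (a hedged sketch: the /.well-known/nodeinfo discovery path comes from the NodeInfo spec, but the code is mine, not FediDB's):

```python
import json
from urllib.request import urlopen


def nodeinfo_url(well_known_doc):
    """Pick the NodeInfo document URL out of the well-known response."""
    return well_known_doc["links"][0]["href"]


def fetch_nodeinfo(instance):
    """Two fixed requests per instance, no link-following: the
    discovery document, then the NodeInfo document it points at."""
    with urlopen(f"https://{instance}/.well-known/nodeinfo") as resp:
        well_known = json.load(resp)
    with urlopen(nodeinfo_url(well_known)) as resp:
        return json.load(resp)
```

No recursion, no HTML parsing, no growing frontier of URLs; just a known API endpoint.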

  • Theoretically in the future PieFed might not be limited to only using ActivityPub, or only using Lemmy-compatible ActivityPub.