Posts: 5 · Comments: 113 · Joined: 2 yr. ago

  • Using a “laundry basket with a search robot” IS inherently a worse way to store data than a “file system with hierarchy”.

    Nested folders are reliable and predictable.

    Tagging is also a good option.

    Relying on search that is likely to fail in predictable ways is an awful way to do anything serious. And therein lies the problem... These people have mostly never done serious work with a computer that other people rely on. As soon as someone else stands to lose money or fail a class because you can't find a file, the distinction will come into sharp focus.

  • I wasn't referring to managing. I was referring to posting, as a user. If you haven't done that either, that doesn't mean you can't find all the discussions and decisions and rules and policies people came up with over the last four decades.

  • It could be implemented on both the server and the client, with the client trusting the server most of the time and spot checking occasionally to keep the server honest.

    The origins of upvotes and downvotes are already revealed on objects on Lemmy and most other fediverse platforms. However, this is not an absolute requirement; there are cryptographic solutions that allow verifying vote aggregation without identifying vote origins, but they are computationally expensive.
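    The "trust most of the time, spot check occasionally" idea could look something like the sketch below. Everything here is hypothetical: `fetch_votes` stands in for whatever API would return the individual votes a fediverse server already reveals, and the 5% check rate is an invented tuning knob.

```python
import random

# Hypothetical client-side spot check: accept the server's reported vote
# total most of the time, but occasionally re-count the individual votes
# (which Lemmy-style platforms already expose) to keep the server honest.

SPOT_CHECK_RATE = 0.05  # check ~5% of objects; a made-up tuning knob

def verify_total(reported_total, fetch_votes, rng=random.random):
    """fetch_votes() is an assumed API returning the list of individual
    votes (+1 / -1) for an object; it is only called on a spot check."""
    if rng() >= SPOT_CHECK_RATE:
        return True  # trust the server this time, no extra traffic
    actual = sum(fetch_votes())
    return actual == reported_total
```

    A client could flag or down-rank a server that repeatedly fails these checks, without ever paying the cost of verifying every object.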

  • Web of trust is the solution. Show me vote totals that only count people I trust at full weight, people they trust at 90%, people those people trust at 81%, etc. (The 0.9 multiplier should be configurable if possible!)
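    A minimal sketch of that weighting, assuming each user publishes the set of users they trust (the `trusts` mapping, user names, and depth cutoff are all made up for illustration):

```python
from collections import deque

def trust_weights(me, trusts, decay=0.9, max_depth=4):
    """BFS outward from `me`; a voter at trust-distance d gets weight
    decay ** d (1.0, 0.9, 0.81, ...). First (shortest) path wins."""
    weights = {me: 1.0}
    queue = deque([(me, 0)])
    while queue:
        user, depth = queue.popleft()
        if depth >= max_depth:
            continue  # diminishing returns: stop expanding past the cutoff
        for friend in trusts.get(user, ()):
            if friend not in weights:
                weights[friend] = decay ** (depth + 1)
                queue.append((friend, depth + 1))
    return weights

def weighted_total(votes, weights):
    """votes: {user: +1 or -1}; votes from outside my web count for 0."""
    return sum(v * weights.get(u, 0.0) for u, v in votes.items())
```

    Note the design choice: a stranger's vote is weighted 0 rather than some small default, so brigading from outside the web of trust simply doesn't register.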

  • Web of trust. The biggest thing missing from most attempts to build social networks so far. A few sites did very weak versions, like Slashdot's friend/foe/fan/freak rating system.

    Let me subscribe to, upvote, downvote, and filter specific content. Let me trust (or negative-trust) other users (think of it like "friend" or "block", in simple terms).

    Then, and this is the key... let me apply filters based on the sub/up/down/filter/etc actions of the people I trust, and the people they trust, etc, with diminishing returns as it gets farther away and based on how much people trust each other.

    Finally, when I see problematic content, let me see the chain of trust that exposed me to it. If I trust you and you trust a Nazi, I may or may not spend time trying to convince you to un-trust that person, but if you fail or refuse then I can un-trust you to get Nazi(s) out of my feed.
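    That last step, "show me the chain of trust that exposed me to it", is just a shortest-path search over the same published trust sets. A hedged sketch, with all names invented:

```python
from collections import deque

def trust_chain(me, target, trusts):
    """Return the shortest trust path me -> ... -> target, or None if
    `target` is not reachable through my web of trust at all.
    `trusts` maps each user to the set of users they trust."""
    parent = {me: None}
    queue = deque([me])
    while queue:
        user = queue.popleft()
        if user == target:
            # Walk the parent links back to `me` to reconstruct the chain.
            path = []
            while user is not None:
                path.append(user)
                user = parent[user]
            return path[::-1]
        for friend in trusts.get(user, ()):
            if friend not in parent:
                parent[friend] = user
                queue.append(friend)
    return None
```

    Shown a chain like `me -> you -> nazi`, I know exactly which link to cut: either you drop the bad actor, or I drop you.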