Posts: 3 · Comments: 469 · Joined: 2 yr. ago

  • Hmm, I get you. But I don't think that's what this discussion is about. I'm more concerned with the technical difficulties / impossibilities / inconsistencies of the approach. Less so with whether it should replace the current solution, or a possible upgrade path. That's something to worry about later. It's more that I don't think it's going to work properly; it seems to combine the disadvantages of two different approaches.

    But I'm happy if someone goes ahead and does a better approach. I also see the shortcomings of the current solution. Maybe I'm being too pessimistic.

  • I really agree with your premise. Less responsibility on the server means depending on it less. We'd gain independence, could move accounts and do some more nuanced things. But I really think the less your own server or relay does, the more you're prone to suffer from network outages, other servers becoming unavailable etc. So you'd need to duplicate everything anyway to compensate for that. And you introduce lots of additional traffic by fetching all the hashtags from everywhere. Or you'd end up in the same situation as before, where they're subject to availability on your instance or your perspective on the network.

    Plus you want unsubscribed old posts showing up and a perspective that's independent of the chosen instance. So you basically need to replicate everything everywhere. And this introduces additional complexity and resource usage, while your goal was to reduce that. (And federation becomes just an inconvenience and additional unnecessary work at that point.)

    It's not that it's technically difficult. We could do that. And you're right to point at XMPP and Movim and stuff. But that also doesn't solve most of the issues you outlined in your initial post. It's even more narrow in how you rely on your own server to shape your perspective on the whole network.

    And sometimes this is what we want. People run instances dedicated to a topic. For example a Mastodon server for IT and tech people. Of course you want IT-related stuff to show up on your main page. And we sometimes want moderation and a place to have civilized discussions. Not a place of anarchy and shitposting like on 4chan. That requires some form of hierarchy or democracy. And at the end of the day the server operators are responsible for what content is shared (publicly) via their infrastructure...

    So I'd say you can't achieve all your goals with ActivityPub. You need to think bigger. Maybe do away with federation altogether. Since federation is all about having different instances with a different focus and perspective on the same network. Maybe focused on a language or subject or sub-community of users, with different rules and moderation. And you want more of a unified perspective, where everyone gets the same thing and there are fewer intermediaries. I'd say that is fundamentally incompatible with this form of federation and kind of out of scope. You probably want a network without that hierarchy. And that comes with different technical challenges and advantages.

    (And suppose we extended ActivityPub. Instead of separating and moving stuff to the client, we could imagine you install a Lemmy or Mastodon server/instance on your computer or phone. Along your browser. You'd have it all on your device and could configure it like you wanted. I'm not sure if that'd be a superior solution.)

  • I've had Debian on my servers for a decade or so, and on several workstations. My past experience doesn't quite reflect that. The Debian guys and gals have always been pretty quick with patching vulnerabilities. Like outstandingly fast.

    There is some merit to the bugfixing argument. But that's kind of the point of Debian Stable(?!) Like in the meme picture of this post, I don't want updates each day. And I also don't want the software on my servers to change too much on its own. I know my bugs and have already dealt with them, and I'm happy that it now works seamlessly for 6 months or so...

    And that's also why I have Debian Testing on my computer. That gives me sort of an unofficial rolling distro. With lots of updates and bugfixes. I mean, in the end you can't have no updates and lots of updates at the same time. It's either-or. And we can choose depending on the use-case. (I think the blame is on the admin if they choose the wrong tool for a task.)

  • Ah, you're right. Nostr uses relays. Now I know what the name stands for. Sounds a bit like your proposal taken to the extreme. The "servers" get downgraded to relatively simple relays that just forward stuff. The magic happens completely(?) on the clients.

    I'm still not sure about the application logic. Sure, I also like the logic close to me (the user). The current trend has been towards the opposite for quite some time. Sometimes the explanation is simple: if you do most things on the server, you retain control over what's happening. That's great for selling ads and controlling the platforms in general. On the other hand it also has some benefits for power efficiency on the devices. I'm not talking about computing stuff, but rather about something like Google Cloud Messaging, whose purpose is to reduce the number of open connections and the power draw by combining everything into a single connection for push messages. In order to decide when to wake a device, it needs access to the result of the filtering and message prioritization. Which then has to be done server-side.

    I'm also not sure about the filtering of hashtags. I mean if you subscribe to a hashtag. Or want to count occurrences to calculate a trend... Something needs to work through all the messages and filter/count them. Doesn't that mean you'd need all of Mastodon's messages of the day on your device? I'm sure that's technically possible. Phones are fast little computers. And 4G/5G sometimes has good speed. But I'm not sure what kind of additional traffic you'd estimate. 50 Megabytes a day is 1.5GB of your monthly cellular data plan (rough numbers are sketched at the end of this comment). A bit less because sometimes people are at home and use wifi... But then they also don't just use one platform, but have Matrix, Lemmy and Mastodon installed. And you can't just skip messages, you'd need to handle them all to calculate the correct number of upvotes and hashtag uses. Even if the user doesn't open the app for a week.

    I don't quite "feel it". But I also wouldn't rule out the possibility of something like a hybrid approach. Or some clever trickery to get around that for some of the things a social network is concerned with...

    Or like something I'd attribute more to edge computing. The client makes all the decisions and tells the edge (router) exactly what algorithm to use to do the ranking, how to do the filtering and when it wants to be woken up... That device does the heavy lifting and caches stuff and forwards them in chunks as instructed by the client.
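    To make the traffic concern from a few paragraphs up concrete, here's a rough back-of-envelope. The 50 MB/day volume and the wifi share are assumptions for illustration, not measurements:

    ```python
    # Back-of-envelope: what full message replication could cost a phone per month.
    daily_traffic_mb = 50    # assumed average volume pushed to the client per day
    wifi_share = 0.4         # assumed fraction of that traffic handled over wifi
    days = 30

    total_gb = daily_traffic_mb * days / 1000
    cellular_mb = daily_traffic_mb * (1 - wifi_share) * days

    print(f"~{total_gb:.1f} GB per month, of which ~{cellular_mb:.0f} MB over cellular")
    ```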

  • Hmmh. I can't really make an informed statement. I can't fathom qemu being experimental. That's like a 20-year-old project used by lots of people. I'm not sure. And I've yet to try Box64.

    I looked it up. The Snapdragon X Elite "Supports up to 64GB LPDDR5, with 136 GB/s memory bandwidth", while the Apple M2/M3 have anywhere from 100 GB/s memory bandwidth to 150, 300 or 400 GB/s (800 GB/s in the Ultra). (And a graphics card has something like ~300 to ~1,000 GB/s.)

    (Of course that's only relevant for running large language models.)
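    Memory bandwidth is the limiting factor because, roughly, each generated token has to stream all the model weights through memory once, so bandwidth divided by model size gives an upper bound on tokens per second. A minimal sketch with an assumed model size:

    ```python
    # Rough upper bound on single-user LLM decode speed:
    # each token reads every weight once, so tokens/s <= bandwidth / weight size.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    model_size_gb = 35  # assumed: roughly a 70B-parameter model quantized to 4 bit
    for name, bw in [("Snapdragon X Elite", 136), ("Apple M3 Max", 400), ("Graphics card", 1000)]:
        print(f"{name}: ~{max_tokens_per_sec(bw, model_size_gb):.0f} tokens/s at best")
    ```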

  • Hmmh. But how would that then change Mastodon not displaying previous (uncached) posts? Or queries running through the server with its perspective?

    And I fail to grasp how hashtags and the Lemmy voting system are related to a client/server architecture... You could just implement a custom voting metric on the server (a toy example is sketched at the end of this comment). Sure, you could also implement that five times in all the different apps. But you'd end up with the same functionality regardless of where you do the maths.

    And if people are subscribed to like 50 different communities or watch the 'All' feed, there is a constant flow of ActivityPub messages all day long. Either you keep the phone running all day to handle that, or you do away with any notification functionality. And replicating the database to the device either forces you to drain the battery all day, or you just sync when the user opens the app. But opening Lemmy and waiting a minute for the database to sync before new posts appear isn't a great user experience either.

    I'd say we need nomadic identity, more customizability with options like hashtags, filters and voting, and dynamic caching, because as of now Fediverse servers regularly get overwhelmed if a high-profile person with lots of followers posts an image. But most of that needs to be handled by servers. Or we do a full-on P2P approach like with Nostr or other decentralized services. Or edge computing.

    I don't quite get where in between federated and decentralized (as in p2p) your approach would be. And if it'd inherit the drawbacks of both worlds or combine the individual advantages.

    And ActivityPub isn't exactly an efficient protocol and neither are the server implementations. I think we could do way better with a more optimized, still federated protocol. Same with Matrix. That provides me with similar functionality to what my old XMPP server had, just with >10x the resource usage. And both are federated.
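    To illustrate the voting-metric point from above: a ranking is just a function over votes and post age, and it computes the same result whether it runs on the server or in five different apps. A minimal sketch of a Reddit-style "hot" score (purely illustrative, not Lemmy's actual formula):

    ```python
    import math

    def hot_score(upvotes: int, downvotes: int, hours_old: float, gravity: float = 1.8) -> float:
        """Vote score damped by age: newer posts with the same votes rank higher."""
        score = max(upvotes - downvotes, 1)
        return math.log(score) / (hours_old + 2) ** gravity

    # Same inputs, same ranking, no matter which machine evaluates it.
    posts = [("fresh post", 40, 2, 1.0), ("old post", 400, 20, 48.0)]
    for title, up, down, age in posts:
        print(title, round(hot_score(up, down, age), 4))
    ```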

  • I think I can agree with that. For me it's a bit the other way around. My friends aren't on Discord. But the network effect is kind of hard to overcome. I'd say you can learn about privacy and new (to you) software and protocols by spending two or three evenings of your life. But convincing all your friends so it becomes any fun is considerably harder. I'd just name the actual issue, then. Otherwise people confuse it with Linux or Signal/Matrix/whatever being harder to operate.

  • Because with all of that, messaging, email, XMPP, Matrix and ActivityPub, most of the magic happens on the server. Take email for example. The server takes care of being online 24/7. It provides like 5GB of storage for your inbox that you can access from everywhere. It filters messages and does database stuff so you can have full-text search. Same with messaging. Your server coordinates with like 200 other servers so messages from users from anywhere get forwarded to you. It keeps everything in sync. It caches images so they're available immediately.

    That allows the clients/apps to be very simple. They just need to maintain one connection to your server and ask every now and then if there's anything new. Or query new data/content. Everything else is already taken care of by the server.

    OP's suggestion is to change that. Move logic into the client/app. But it's not super easy. If the client now needs to maintain 200 connections at all times instead of just 1 to see if anyone replied, your phone might drain something like 200 times as much battery. And requiring the phone to be reachable also comes with a severe penalty. Phones have elaborate mechanisms to save power and sleep most of the time. Any additional network activity requires the processor and the modem to stay active for longer periods of time. And apart from the screen, that's one of the major things that draws power.
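    A sketch of why the single home-server connection is so cheap for the client: the app only has to ask one place "anything new since X?", while a server-less design would have to repeat that against every remote instance. The server names and the /api/updates endpoint here are made up for illustration:

    ```python
    import json, urllib.request

    def fetch_new(server: str, since: str) -> list:
        """Ask one server for everything newer than the given cursor (hypothetical endpoint)."""
        with urllib.request.urlopen(f"https://{server}/api/updates?since={since}") as resp:
            return json.load(resp)

    # Federated client: one request to the home server, which has already collected
    # replies, mentions and subscribed posts from its ~200 peer instances.
    updates = fetch_new("home-instance.example", since="2024-06-01T00:00:00Z")

    # Server-less client: the same question asked once per peer, keeping radio and CPU busy.
    # peers = ["instance-1.example", "instance-2.example", ...]  # ~200 entries
    # updates = [u for p in peers for u in fetch_new(p, since="2024-06-01T00:00:00Z")]
    ```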

  • Well, the obvious answer to nearly all those broad questions is: "It depends..."

    But I mean, what "work" and "effort"? Using Matrix isn't exactly hard... You need to install an app, register an account, think of a password and log in... That's pretty much the same complexity as with Facebook or Discord?!

    Surely issuing big tech companies a blank cheque for your life is easy. And you get free services in return. But I don't think using privacy-respecting services, and even Linux to do your office stuff, is substantially more difficult than giving away all your data.

  • Wow. That settles the discussion pretty quickly...

    I'm not sure about the translation layer... Aren't there things like qemu and box64? And multiarch support is part of most Linux distributions as of today? I always thought it's just a few commands to make your system execute foreign binaries. I mean, I've only ever tried cross-compiling for ARM and running 32-bit games on the amd64 architecture, so I don't know that much. In the end I don't use much proprietary software, so it's not really an issue for me. >99% of the Linux software I use is available for ARM. But I can see how that'd be an issue for a gamer, regardless of the operating system being Windows or Linux or macOS.

    And I'm not really interested in the AI coprocessor itself. The real question for me is: can it do LLM inference as fast as an M2/M3 MacBook? For that it'd need RAM connected via a wide bus. And then there's the question of what a machine with 64GB of RAM costs. That's the major drawback with a MacBook, because they get super expensive if you want a decent amount of RAM.

  • That's a nice idea but has some pretty obvious technical drawbacks that aren't discussed in the blog article:

    The complexity of most networks grows steeply with the number of connections between the entities; a fully meshed network needs on the order of n² connections for n participants (a small calculation is sketched at the end of this comment). It gets immensely more computationally expensive that way and you're bound to use lots of additional network traffic and total CPU power.

    And some (a lot of) people like using social media on their phones instead of a computer. You're bound to drain their batteries real fast by moving application logic there.

    Other than that I like the general idea. The Fediverse should be more dynamic. Caching and discovery have some big issues in the current form. That should be tackled and we need technical solutions for that. And the current architecture isn't perfect at all.

    Furthermore, if we're talking about the edge, where networks are smarter... why then move it into the browser, which isn't at the edge? Wouldn't that be an argument to invent edge routers like in edge computing? I mean with c2s you have a server on one side and a client on the other side, with the edge somewhere in between. If you now flip it you end up in a different situation. But there's still nothing at the edge where you could introduce some smarts...
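    As a small illustration of the connection growth mentioned above, under the assumption of a fully meshed network where every participant talks directly to every other one:

    ```python
    def full_mesh_connections(n: int) -> int:
        """Number of links when each of n participants connects to every other one."""
        return n * (n - 1) // 2

    for n in (10, 1_000, 100_000):
        print(f"{n:>7} participants -> {full_mesh_connections(n):,} connections")
    # 10 -> 45, 1,000 -> 499,500, 100,000 -> ~5 billion
    ```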

  • Maybe you can find a guide/tutorial on how to set it up?

    Usually you need the correct packages installed on your system to enable something like VAAPI or QSV. Then you need a version of ffmpeg with that enabled. And then configure it in Jellyfin correctly.

    I don't have any specific insights on how to do it with Fedora. I suppose it's very similar to how it's done on other Linux distros.
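    One way to sanity-check the middle step (an ffmpeg build that actually has hardware acceleration compiled in) before touching the Jellyfin settings; this just shells out to ffmpeg and vainfo, assuming both are installed and on the PATH:

    ```python
    import shutil, subprocess

    # List the hardware acceleration methods this ffmpeg build supports (should include vaapi/qsv).
    if shutil.which("ffmpeg"):
        print(subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"],
                             capture_output=True, text=True).stdout)

    # vainfo (from libva-utils on most distros) shows whether the VAAPI driver for
    # your GPU is installed and which codecs it can decode/encode.
    if shutil.which("vainfo"):
        print(subprocess.run(["vainfo"], capture_output=True, text=True).stdout)
    ```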

  • Hmm. There is value in both. When I started out with NixOS I read lots of wiki articles. And we all know there is some room for improvement there. And I also read several configs of other people to see how things tie together. And to look up things that aren't documented. Nowadays I just put what I'm looking for plus "language:nix" into GitHub. Lots of personal configs turn up that way. Sometimes with useful stuff. So I think anything is better than nothing. But obviously, if you have kids, prioritize them and let other people come up with the detailed wiki articles 😆

  • Fair enough. I personally think someday someone will have the same niche issue I've already tackled and be happy to stumble over my code while googling it. So I just drop most things I do somewhere for other people to find. Regardless.

    But concerning NixOS, I also still need to switch over a few things to agenix and generalize parts of my config before publishing it.

  • I'd recommend YunoHost, too. It's pretty beginner-friendly and you'll probably get some positive results without having to learn everything at once. I mean, you have quite a lot on your plate if you're learning Linux, Docker, Docker-Compose and maybe networking and DevOps all at the same time.