
  • It's totally not crazy thinking. :) I think the main problem is that while Mastodon and Lemmy implement the server-to-server part of ActivityPub, they don't implement the client-to-server part of the standard, and instead build their own REST API and client. This is why, when you subscribe to actors from another application, it looks bad: it's supposed to be consumed in their own client, or something that tries to emulate it (and the fact that they each implement their own extensions to ActivityPub doesn't help either).

    In a perfect world, ActivityPub-based applications would implement the client-to-server part of the standard too, so that we'd have a multitude of third-party clients able to consume data from any ActivityPub-based application without looking broken. I certainly hope we go in this direction in the future, because interoperability looks half-baked as it is right now, and the fediverse would be even more awesome with such an upgrade.
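    For the curious, the client-to-server part is not complicated in principle: a client POSTs an activity to the actor's outbox. A minimal sketch of what such a payload could look like (all URLs here are made-up examples, not a real instance):

```python
import json

# Build a "Create" activity for a note, in the shape the ActivityPub
# C2S spec describes. Actor and audience URLs are hypothetical examples.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://social.example/users/alice",
    "object": {
        "type": "Note",
        "content": "Hello, fediverse!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

body = json.dumps(activity)

# A real client would now POST `body` to the actor's outbox
# (e.g. https://social.example/users/alice/outbox) with the header:
#   Content-Type: application/ld+json; profile="https://www.w3.org/ns/activitystreams"
print(body)
```

    Every ActivityPub server implementing C2S would accept that same payload, which is exactly what would make generic third-party clients possible.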

  • Who in a decentralised system can or should take responsibility?

    The customers. :) Hear me out:

    The main reason why there are so many problems with deliveries (way more with UPS, DHL and the like than with Uber Eats deliverers, in my own experience) is that, in their heads, we're not their customers. They're paid by the merchants (UPS/DHL/etc are paid by your shopkeeper), or they're paid by the platform (the rider is paid by Uber Eats/Deliveroo/etc), but the end customer is just part of the constraints for them, especially since the customer doesn't even choose who will deliver their package (you don't like UPS? Too bad!). Give the customer that choice, and make them pay the deliverer directly, and I guarantee all those problems will go away. This is why I said we need a decentralized reputation system: so that the customer can see the reputation of local delivery services before selecting one.

    When the problem is with the shop, well, this is already sort of dealt with. We already have review systems and we already select our shops, so it does happen that shops behave poorly, but not for long. Users do have to be educated about verifying reviews, though, and developers have to implement countermeasures and stay on top of the review-cheating game.

    And to avoid problems with the platform, we have the interoperability of standards like ActivityPub: there is one global network (like the fediverse, or the web), and multiple programs are implemented to use it. They have an incentive to work well because there is competition, something that centralized platforms eliminate altogether.

  • Indeed, there need to be third parties who control quality - just like there are moderators here, if you think about it. We already sort of have those moderators: local shopkeepers. I'd be satisfied with a service that lets me leverage all those local shops without having to leave my place (as I mentioned in another comment on this page: "Uber Eats/Deliveroo for everything").

  • Totally agree, what we really need is an "Uber Eats/Deliveroo for everything", leveraging local businesses. And if we can get a decentralized reputation system, then such a platform can be decentralized as well.

  • as in my experience, most regular users do not have a Matrix client installed

    I understand your point, but by that logic, we should use Reddit rather than Lemmy, as most users are there. It's not only about ease of use, it's about being sure users won't be abused. Discord is still in its acquisition phase, but you can be sure enshittification will come next.

  • Thanks for raising the issue.

    Most probably, the people who made that decision are not aware of the implications and made that choice in good faith, so it's worth giving reasons why you want them to avoid proprietary software, rather than just frowning at them.

    To the admins of lemmy.world and anyone who feels confused about why this is an issue: it is about freedom. You all know how Facebook, Twitter, Reddit, etc are turning ugly, and you can't do anything about it. With FOSS (Free and Open Source Software), when it turns ugly, you can do something about it. You (or any technical person who agrees with you) can take the code and go your own way with it (we call that "forking"). No decision of the authors can be forced upon you. Similarly, if you think something is not working right, you can fix it yourself and send the changes to the maintainers of the code, who usually are happy to get some help. So it's also about the freedom to fix your own problems, instead of waiting and praying for the authors to do something about it.

    And this is the whole spirit of the Fediverse: taking matters into our own hands instead of being betrayed once more by a company which decides that its bottom line requires being user-hostile. One day this will happen to Discord too, it always ends up there. That's why people using Lemmy who are aware of those problems are not happy to see lemmy.world use Discord.

    Thanks to the admins of lemmy.world for all the work they provide to the Fediverse.

  • I do have to say, for the purpose of tinkering, I love these bigger projects because you learn so much on the way. Now having read your answer I am even more excited to try it out :D

    That's awesome to hear! Welcome, and have fun! :)

    I haven’t heard of most of your abbreviations/term till now

    Oh, my apologies. Here is a definition list:

    • SMTP: Simple Mail Transfer Protocol. The base of any mail system; it's the server you contact to send emails, which relays your mail to another SMTP server (where your contact is hosted), which stores the mail for the user to retrieve.
    • IMAP: Internet Message Access Protocol. One of the protocols that can be used to retrieve emails from your mailserver (the other one being POP3).
    • SPF: Sender Policy Framework. A configuration on your domain name specifying which machines are allowed to send mail in its name.
    • DKIM: DomainKeys Identified Mail. A signing process (signing each mail) to validate that the "From" address is indeed authorized by the domain it claims to come from.
    • DMARC: Domain-based Message Authentication, Reporting and Conformance. A warning system to let you know when someone pretended to be you (also giving instructions about what to do with emails when SPF and/or DKIM are missing or wrong).
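    If it helps make these concrete: SPF, DKIM and DMARC all live as TXT records on your domain. A sketch of what they could look like (domain, selector, IP and policy values are made up; the DKIM public key is produced by your signing tool):

```
example.org.                  TXT  "v=spf1 ip4:203.0.113.10 -all"
mail._domainkey.example.org.  TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.org.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

    The first says "only 203.0.113.10 may send mail for example.org", the second publishes the key receivers use to check DKIM signatures, and the third tells receivers what to do on failure and where to send reports.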
  • I guess slapping it on my local raspberry pi wouldn’t be enough no?

    Oh no, that would be nowhere near enough. :) Managing a mailserver is a sysadmin task in itself. While you don't need to do much once it works (which is often a perk of sysadmin work, compensating for the fact that when it does not work, they may have to wake up in the middle of the night to fix it), it's notoriously difficult to get right. First you have the configuration of the mailserver itself to get right, so that you can send emails but nobody else can, and you don't become a spam relay without knowing it. Then you have a lot of configuration to do to be able to retrieve your emails from your server, which uses other protocols that you must learn about. Then you have "optional" things to set up (SPF, DKIM and DMARC), without which you won't be able to send mail to Gmail or Outlook. And when you have got all of that right, you will have enough experience to be hired as a sysadmin. :)

    I can't point to a good resource for learning it; I learned it 15 years ago when it was much simpler (before SPF and DKIM), and picked up each addition as it appeared, but any course on how to manage a mail system will do. There is no difference between doing it for your self-hosted server and for a company (except maybe that for a company, they'll make you handle users in a database, which you can forego for your own needs). I would recommend learning how to use postfix first, then any IMAP server (courier-imap is a top runner), and when you're comfortable with that, you can learn about SPF, then DKIM, then DMARC. But be aware before going through it that this is basically learning a new skill (sysadmin). You can find docker images that set up everything automatically for you, but I would recommend against that, because at some point things will break and you will have no idea how to fix them. And if you try to fix them while not knowing well what you're doing, that's a good way to end up being a spam relay. Plus, those docker images are difficult to customize, which rather defeats the point of managing your own mail system to begin with.

  • Well I didn’t want google to read my mails

    Sadly, that only works if none of the recipients of the mail are on gmail (or if everyone uses PGP, which I would tend to think is even rarer).

    I host my own mailserver as well, and I would add as benefits:

    • creating as many email addresses as you want, easily, possibly regexp-based addresses (awesome for giving every site a different address and knowing where the spam comes from, without using the well-known username+something@host scheme). That also makes routing/filtering mail much easier: you just have to match the recipient address.
    • delivering mail to software, to put email at the center of inter-app messaging (basically, that means postfix passes a matching email to the executable of your choice on your system instead of storing it in your mailbox)
    • advanced rules for handling emails. When I want to block a spammer who managed to get my real address, I use regexps to match their mails and reject them with a "REJECT 5.1.1 Recipient address rejected: User unknown in local recipient table" error, imitating the error for unknown users, which often triggers their mail system to remove your address from its database
    • easily configuring apps to send me email. When I write an application that will send emails to me and only me, I configure it to use my SMTP server on port 25 without authentication instead of the usual SMTPS configuration they expect. It connects and asks to send a mail to me, which is accepted since I'm a local user. It makes everything much easier (try to do that with gmail and get your IP banned)
    • easy backups, both of the mail system (I back up the whole sdcard of the pi) and of the emails. Never lose an email again.
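    To illustrate the "advanced rules" point: with postfix, such a fake "user unknown" reject can be done with a pcre access map (the file name and the pattern here are made-up examples, adapt them to your spammer):

```
# /etc/postfix/sender_access.pcre, hooked into main.cf with:
#   smtpd_sender_restrictions = check_sender_access pcre:/etc/postfix/sender_access.pcre
/.*@annoying-spammer\.example$/ REJECT 5.1.1 Recipient address rejected: User unknown in local recipient table
```

    Any sender address matching the regexp gets the same error a nonexistent mailbox would produce.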
  • Oh, I see. Totally makes sense. :)

    I guess it depends on the country, but here in France, yes, most landline ISPs provide static IPs (maybe all? there are a couple I haven't tried; mobile IPs are always dynamic, though). It was not always the case, but I haven't had a dynamic IP since the 2000s. I feel you, dealing with pointing a domain to a dynamic IP is a PITA.

    Ahah, yeah, I protected myself against accidentally banning my own IPs. First, my server is a Pi at home, so I can just plug a keyboard and a screen into it in case of problem. But more importantly, as I do that blacklisting through fail2ban, I just whitelisted my IPs and those of my relatives (it's the ignoreip variable in /etc/fail2ban/jail.conf), so we never get banned even if we trigger fail2ban rules (hopefully, grandma won't try to bruteforce my ssh!). It allowed me to do another cool thing: I made a script, run through cron, that parses the logs for 404s and checks whether they were generated by one of the IPs in that list, mailing me if that's the case. That way, I'm made aware of legit 404s that I should fix in my applications.
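    For the curious, that cron script can be tiny. A sketch of the idea in Python (the log format is the common nginx/Apache "combined" style, and the IPs are example placeholders; my real script mails the result instead of printing it):

```python
# Flag 404s coming from whitelisted (i.e. legit) IPs in an access log,
# so broken links can be fixed instead of blacklisted.
import re

TRUSTED_IPS = {"203.0.113.5", "198.51.100.7"}  # same list as fail2ban's ignoreip

# Matches e.g.: 203.0.113.5 - - [10/Oct/2023:13:55:36 +0200] "GET /missing HTTP/1.1" 404 153
LINE_RE = re.compile(r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" (?P<status>\d{3})')

def legit_404s(lines):
    """Return the requests that got a 404 from a trusted IP."""
    hits = []
    for line in lines:
        m = LINE_RE.match(line)
        if m and m.group("status") == "404" and m.group("ip") in TRUSTED_IPS:
            hits.append(m.group("req"))
    return hits

log = [
    '203.0.113.5 - - [10/Oct/2023:13:55:36 +0200] "GET /missing HTTP/1.1" 404 153',
    '192.0.2.99 - - [10/Oct/2023:13:56:01 +0200] "GET /missing HTTP/1.1" 404 153',
    '203.0.113.5 - - [10/Oct/2023:13:57:12 +0200] "GET / HTTP/1.1" 200 512',
]
print(legit_404s(log))  # only the trusted IP's 404 shows up
```

    Pipe the output to sendmail (or deliver it to yourself locally, as described above) and cron does the rest.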

  • Oh, ok, you whitelist IPs in your firewall. That certainly works, if a bit brutal. :) (then again, I blacklist everyone who triggers a 404 on my webserver, maybe I'm not the one to speak about brutality :P ) You don't even need a VPN, then, unless you travel frequently (or your ISP provides dynamic IPs, I guess).

  • I'm not sure about the feasibility of this (my first thought would be that ssh on the host can be accessed directly by IP, unless maybe the VPN software creates its own network interface and sshd binds to it?), but this does not remove the need for frequent updates anyway, as openssh is not the only software that could have bugs: every piece of software that opens a port should be protected as well, and you can't hide your webserver on port 80 behind a VPN if you want it to be public. And it's in any case a much more complicated setup than just doing updates weekly. :)

  • If you do not neglect updates, then by all means, changing ports does not hurt. :) Sorry if I have a strong reaction on that, but I've seen way too many people in the past couple of decades counting on such anecdotal measures and not doing the obvious. I've seen companies doing that. I've seen one changing ports, forcing us to use the company certificate to log in, and then not updating their servers for 6 months. I've seen sysadmins who considered that rotating servers every year made it useless to update them, but employees should all use Jumpcloud "for security reasons"! Beware, though, of mentioning port changing without saying it's anecdotal and that the most important thing is updates, because it will encourage such behaviors. I think the reason is that changing ports sounds cool and smart, while updates just sound boring.

    That being said, port scanning is not just about targeted pentesting. You can't just run nmap on a host anymore, because IDSes (intrusion detection systems) will detect it, but nowadays automated pentesting tools do distributed port scanning to bypass them: instead of flooding one host to test all its ports, they test a range of hosts for the same port, then start over with a new port. It's halfway between classic port scanning and the "let's just test the whole IP range for a single vulnerability" approach we more commonly see nowadays. But these scans are much harder to detect, as they hit smaller sets of hosts, and there can be hours before the same host is tested twice.

  • The best you can do to know if it was an attack is to inspect the logs when you have time. There are a lot of things that can cause a process to go wild without it being an attack. Sometimes, even filling the RAM can make the CPU appear overloaded (and will freeze the system anyway). One simple way to figure out if it's an attack: reboot. If it's a bug, everything will get back to normal. If it's a DDoS, the problem will reappear within a few minutes of the reboot. If it's a simple DoS (someone exploiting a bug in a piece of software to overload it), it will reappear or not depending on whether the exploit was automated and recurring, or just a one-shot.

    The fact that both your machines fell at the same time would tend to suggest an attack. On the other hand, it may just be a surge of activity on the network hitting VPSes that don't have nearly enough resources to handle it. Or it may even be a noisy-neighbor problem (the other people sharing the real hardware your VPSes run on overloading it).

  • However Port 22 should never be open to the outside world.

    Wat. How do you connect with ssh, then? You can bind openssh to another port, but the only thing that changes is that you have less noise in your logs. The real most important security measure is to make sure your software is always up to date, as old vulnerable software is the first cause of penetration (and yes, it's better to deactivate password login and only use ssh keys).
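    For reference, both the cosmetic measure and the real one are a couple of lines in /etc/ssh/sshd_config (the port number is just an example):

```
# /etc/ssh/sshd_config (fragment)
# Cosmetic: only moves the noise out of your logs
Port 2222
# The actual win: key-only logins
PasswordAuthentication no
```

    Reload sshd after editing, and keep an existing session open while you test, so a typo doesn't lock you out.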

  • "karma" (as reddit calls scoring) was never truer to its name. :)

    I haven't looked at Lemmy's implementation of upvotes/downvotes, but they should be ActivityPub activities, which means they should appear when making a request to the user's actor.

    EDIT: I've just checked random users' outboxes (that's the ActivityPub name for the list of activities), including mine, and they are actually just empty. So that probably means Lemmy only publishes the upvotes/downvotes when pushing activities to federated servers, which makes those activities somewhat more private, although not completely: someone could set up their own instance to learn about them, and it's best to assume that at some point, someone will start such an instance and publish an app revealing all votes to everybody (plus, as others mentioned, Kbin is already doing it).

  • I've been running my own email server for years, and while it's indeed difficult at first, it is possible, and you don't have much to do to maintain it once it works. All the horror stories you hear come from the fact that it's difficult to get right, and even when you get it right, you will have deliverability problems the first year, until your domain name gets established (provided you don't use it for spam, obviously - and yes, marketing is spam).

    What you need :

    • being willing and serious about reading a lot of documentation
    • an IP that is not recognized as a home IP. So you'll need a "business ISP", or one that is not well known. You can bypass this problem by using AWS.
    • choosing a well-recognized TLD for your domain name, like .com, .org, .net, etc. Don't use one of those fancy new extensions (.shop, .biz, etc); they are associated with spammers.
    • learning how SPF works and getting it right (there's plenty of documentation, and there are test tools for that)
    • same for DKIM
    • same for DMARC

    Start using that for a year without making it your main address. It's best to use it for things not too mainstream, like FOSS mailing lists, or discussing with people who run their own mailserver; those will not drop your mails randomly. After a year of frequent usage, you can migrate to that email address or domain.

    Regarding the architecture of your network: do you read your emails on several machines (like, on mobile and laptop)? If not, you can dramatically simplify your design by using POP3 instead of IMAP, connecting your client to the AWS server, downloading all your emails to your computer and removing them from the server at the same time. There, you have all your mails locally and you don't need dovecot. :)

  • I don't use a pihole, but I have a pi with my favorite distro acting as a server, and I use dnsmasq for what you mention. It allows you to set the machine as the nameserver for all your machines (just use its IP in your router's DNS conf; DHCP will automatically point connected machines to it), and then you can just edit /etc/hosts to add new names, and they will be picked up by the nameserver.

    Note that dnsmasq itself does not resolve external names (eg when you want to connect to google.com), so it needs to be configured to relay those requests to another nameserver. The easy way is to point it to your ISP's nameservers or to public nameservers like those from Cloudflare and Google (though I would really recommend against letting them know all the domains you're interested in), or you can go the slightly more difficult way, as I did, and install another nameserver (like bind9) that runs locally. Thankfully, dnsmasq lets you configure its relay nameserver to be on something other than port 53, which is quite rare in the DNS world. Of course, if you're familiar with bind9, you could just declare new zones in it; I just find it (slightly 😂) more pleasant to work with /etc/hosts.
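    Concretely, the dnsmasq side of that setup fits in a couple of lines (the port number is just an example; adapt it to wherever your local bind9 listens):

```
# /etc/dnsmasq.conf (fragment)
# don't pick upstream servers from /etc/resolv.conf
no-resolv
# relay external lookups to the local bind9, on a non-standard port
server=127.0.0.1#5353
# /etc/hosts is read by default; send SIGHUP to dnsmasq after editing it
```

    Local names come from /etc/hosts, everything else falls through to the upstream resolver.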

  • You're welcome. :) Oh yeah, you probably use a lot of them, they are everywhere, although it's not obvious to the user. One way to figure it out is to open the browser inspector (usually Ctrl+Shift+I, same to close it) and look at the "Network" tab, which lists all network requests made by the page, to see whether the list gets emptied when you click a link (if it's a real new page, the list is emptied and new requests appear).

    My apologies, I spent an hour on the popstate problem before losing interest and calling it a day. Lemmy uses the inferno frontend framework (a clone of react), which uses the inferno-router router to handle page changes, which uses the history lib to do it, which… uses pushState as I expected it would. And yet, binding on popstate won't work. 🤷 Maybe I'll have another look at it one day if it bugs me enough. :)