  • Well, it's a tougher question to answer when it's an active-active config rather than a master-slave config, because the former needs the lowest latency possible as requests are bounced all over the place. For the latter, I'll probably set it up to pull every 5 minutes, so 5 minutes of latency (assuming someone doesn't try to push right when the master node is going down); there's a rough sketch of that below.

    I don't think the likes of GitHub work on a master-slave configuration; they're probably on the active-active side of things for performance. I'm surprised I couldn't find anything on this from Codeberg though, you'd think they'd have already solved this problem and might have published something. Maybe I missed it.

    I didn't find anything in the official Git book either. Which one do you recommend?
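
    In case it helps to make the pull idea concrete, here's a rough sketch of what I mean by "pull every 5 minutes" on a slave. Hostnames and paths are made up, and it assumes a bare mirror clone:

      # one-time setup on each slave: a bare mirror of the master's copy of the repo
      git clone --mirror ssh://git@git1.example.lan/srv/git/myrepo.git /srv/git/myrepo.git

      # crontab entry on the slave: fetch everything from the master every 5 minutes
      */5 * * * * git --git-dir=/srv/git/myrepo.git remote update --prune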

  • Thanks for the comment. There's no special use case: it'll just be me and a couple of friends using it anyway. But I would like to make it highly available. It doesn't need to be 5; 2 or 3 would be fine too, but I don't think the number changes the concept.

    Ideally I'd want all servers to be updated in real time, but it's not necessary. I simply want to run it this way because I want to experience something like what the big cloud providers run for their distributed git services.

    Thanks for the idea about update hooks, I'll read more about it.

    Well, the other choice was Reddit, so I decided to post here (Reddit flags my IP and doesn't let me create an account easily). I might ask on a couple of other forums too.

    Thanks

  • This is a fantastic comment. Thank you so much for taking the time.

    I wasn't planning to run a GUI for my git servers unless really required, so I'll probably use SSH. Thanks, yes, that makes the reverse proxy part a lot easier.

    Your idea of having a designated "master" (server 1) and rolling updates out to the rest of the servers is brilliant. The replication procedure becomes a lot easier this way, and it removes the need for the reverse proxy too! I can just use Keepalived and set up weights to make one of them the master and the rest slaves for failover. It also won't do round-robin, so no special handling for sticky sessions! This is great news from the networking perspective of this project.

    Hmm, you said to enable pushing repos to the remote git servers instead of having them pull? I was going to create a WireGuard tunnel and have it accessible from my network for some stuff, but I guess it makes sense. Rough sketches of both the hook and the Keepalived side are below.

    Thanks again for the wonderful comment.
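
    To make sure I understood the push-from-master idea, this is the kind of post-receive hook I'm imagining on server 1. Just a sketch: hostnames and paths are made up, and it assumes the same bare repo path exists on every server.

      #!/bin/sh
      # post-receive hook on the designated "master" (server 1):
      # after every push it receives, mirror the repo out to the other servers.
      for mirror in git2.example.lan git3.example.lan git4.example.lan git5.example.lan; do
          git push --mirror "ssh://git@${mirror}$(pwd)" \
              || echo "replication to ${mirror} failed" >&2
      done

    And the Keepalived side could be as simple as something like this on each node (interface name and VIP are assumptions; the priority would be lowered on the slaves):

      cat > /etc/keepalived/keepalived.conf <<'EOF'
      vrrp_instance GIT_VIP {
          state MASTER            # BACKUP on the other nodes
          interface eth0          # assumed interface name
          virtual_router_id 51
          priority 150            # e.g. 100 on the slaves
          advert_int 1
          virtual_ipaddress {
              192.168.10.50/24    # the VIP that clients point their git remotes at
          }
      }
      EOF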

  • I think I messed up my explanation again.

    The load balancer in front of my git servers doesn't really matter; I can use whatever, really. What matters is: how do I make sure that when a client writes to a repo on one of the 5 servers, the changes are synced in real time to the other 4 as well? Running rsync every 0.5 seconds doesn't seem like a viable solution.

  • You mean have two git servers, one "PROD" and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually (sketched below).

    For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?
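
    For completeness, the "5 remotes" route wouldn't even need 5 separate remotes: one remote can carry several push URLs, so a single push updates every server. Hostnames and paths here are made up:

      # one logical remote with five push URLs; `git push all` then updates every server
      git remote add all ssh://git@git1.example.lan/srv/git/myrepo.git
      # git1 has to be re-added as a push URL because adding any push URL replaces the default
      git remote set-url --add --push all ssh://git@git1.example.lan/srv/git/myrepo.git
      git remote set-url --add --push all ssh://git@git2.example.lan/srv/git/myrepo.git
      git remote set-url --add --push all ssh://git@git3.example.lan/srv/git/myrepo.git
      git remote set-url --add --push all ssh://git@git4.example.lan/srv/git/myrepo.git
      git remote set-url --add --push all ssh://git@git5.example.lan/srv/git/myrepo.git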

  • GitHub didn't publish the source code for that project, previously known as DGit (Distributed Git) and now known as Spokes. The only mention of it is in a blog post on their website, but I don't have the link handy right now.

  • Thank you. I did think of this, but I'm afraid it might lead me into a chicken-and-egg situation, since I plan to store my Kubernetes manifests in my git repo. If the Kubernetes instances go down for whatever reason, I won't be able to access my git server anymore.

    I edited the post, which will hopefully clarify what I'm thinking about.

  • Apologies for not explaining it better. I want to run a load balancer in front of multiple instances of a git server. When my client performs an action like a pull or a push, it will go to one of the 5 instances, and the changes will then be synced to the rest.

    I have edited the post to hopefully make my thoughts a bit clearer.

  • Apologies for not explaining it properly. Essentially, I want to have multiple git servers (let's take 5 for now), have them automatically sync with each other, and run a load balancer in front (a rough sketch of that part is below). So when a client performs an action on a repository, it goes to one of the 5 instances and the changes are written to the rest.

    I have edited the post; hopefully the explanation makes more sense now.
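
    For the load balancer part, one option (only one of several, and just a sketch with made-up hostnames, assuming the balancer's own sshd listens on a different port) would be plain TCP balancing of SSH, e.g. with HAProxy:

      cat >> /etc/haproxy/haproxy.cfg <<'EOF'
      frontend git_ssh
          bind *:22
          mode tcp
          default_backend git_servers

      backend git_servers
          mode tcp
          balance leastconn
          server git1 git1.example.lan:22 check
          server git2 git2.example.lan:22 check
          server git3 git3.example.lan:22 check
          server git4 git4.example.lan:22 check
          server git5 git5.example.lan:22 check
      EOF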

  • Dom0 being based on Fedora has been a gripe of mine for a while now. Fedora isn't that secure without some effort either. Unfortunately, I have no way to confirm which of the two is "more secure".

    Do you have any sort of automated test framework in mind that one could use to test distros against attacks?

  • Thanks for the tip, love Capy.

    You're right, Whonix is probably the better idea. I use Kicksecure now, but if I move to Qubes then I'll use Whonix as a default.

    I'll have to read more about secureblue; I haven't given the documentation as much time as I should have. I guess you could run it in an HVM for now.

    Why do you rank secureblue over Whonix?

  • Hey, I recognise you now! That was a great post; I had a lot of fun reading it. If I could follow people on Lemmy, I'd follow you.

    What do you think about Kicksecure (and Kicksecure inside of Qubes)? I know they are criticised for backports, but leaving that issue aside, I think they have created a very handy distro. I personally go through CIS benchmarks for most of my stuff, including Kicksecure, but it's definitely nice to have a pre-hardened distro (secureblue too, but I hear secureblue isn't a big team, so I'm not sure how much time they have to address the broad range of desktop Linux security issues).

    Honestly, Qubes is the best at this by far. Their method of compartmentalisation takes away the complexity of reasonable security from the end user, making it a mostly seamless experience. If you were to put GrapheneOS and Qubes OS side by side on uncompromised hardware, I'd personally take Qubes. I'd run GrapheneOS inside Qubes with a software/hardware TPM passed through if I could.

  • You can never be private with any device that can connect to the internet of its own volition. Ubiquiti, Alta Labs and MikroTik should never be trusted unless you're OK with your data potentially ending up on their servers.

    With that said, you can manually upgrade MikroTik software and self-host the MikroTik CHR, the Ubiquiti controller and the Alta Labs controller (for a fee, for the latter), which should in theory invalidate this argument. Even then, I do not trust non-FOSS software for such critical infrastructure, so it's still too much for me, but depending on your risk tolerance this might be a good compromise. I would suggest you look seriously at MikroTik: their UI might suck, but their hardware and software capabilities are FAR beyond what Ubiquiti offers for the same price.

    If you want to be private, you should get an old computer, buy quad-port NIC cards from eBay and run a Linux/BSD router on your own hardware. But that's not the most beginner-friendly way to do it, so I don't blame anyone for looking away.
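
    To sketch what I mean by running your own router (interface names are assumptions: eth0 facing the modem, eth1 facing the LAN), the core of it on Linux is just forwarding plus NAT; a real setup would add a proper firewall ruleset, DHCP and DNS on top:

      # enable routing between the NICs
      sysctl -w net.ipv4.ip_forward=1

      # NAT everything leaving the WAN interface (nftables)
      nft add table ip nat
      nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
      nft add rule ip nat postrouting oifname "eth0" masquerade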

  • Thanks. You are correct; however, since root is required for certain processes, I will use different users and doas for my needs (sketched below).

    I have realised that it is hard for me to justify why I want to harden an OS for personal use. I gave privilege escalation as an example, but after reading your comment I have realised that it is not the only thing I am looking to "fix". My intention with running hardened_malloc was to prevent DoS attacks by malicious applications exploiting unknown buffer overflows, and LibreSSL and musl were meant to reduce the attack surface.

    I agree with your comment though. I'm just wondering how I can specify a reason (and why such a reason is required to justify hardening a distro). I haven't found much of a reason for the existence of OpenBSD, Kicksecure, Qubes, etc. other than general hardening and security.
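
    To be concrete about the doas and hardened_malloc parts, this is roughly what I have in mind. The hardened_malloc library path is an assumption (it differs per distro):

      # /etc/doas.conf: let the wheel group escalate, with password caching
      echo 'permit persist :wheel' > /etc/doas.conf

      # preload hardened_malloc for all dynamically linked programs
      # (path is distro-dependent; this one is just an example)
      echo '/usr/lib/libhardened_malloc.so' >> /etc/ld.so.preload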