Permanently Deleted
litchralee @ sh.itjust.works · Posts 1 · Comments 380 · Joined 2 yr. ago
Let's say you have a household of 5 people with 20 devices in the LAN, one can be infected and running some bot, you do not want to block 5 people and 20 devices.
Why not, though? If a home network is misbehaving, whoever is maintaining that network needs to: 1) be aware that there's something wrong, and 2) needs to fix it on their end. Most homes don't have a Network Operations Center to contact, but throwing an error code in a web browser is often effective since someone in the household will notice. Unlike institutional users, home devices are not totally SOL when blocked, as they can be moved to use cellular networks or other WiFi networks.
At the root of the problem, NAT deprives the users behind it of agency: they're all in the same barrel, and the maxim about bad apples will apply. You're right that it gets even worse for CGNAT, but that's more a reason to refuse all types of NAT and prefer end-to-end IPv6.
There are, though the process may be truly arcane -- the .us procedure dates back to RFC 1480 from 1993. Still, people have done it: https://web.archive.org/web/20160316224838/https://owen.sj.ca.us/~rk/howto/articles/usdomain/usdomain.html
I don't think I was? As a rule, I always remove the automatic +1 for my own comment, since I prefer to start the count from zero.
@jayemar already gave a valid counterpoint about how to select the technocrats in the first place. But let's suppose we did somehow select the best and brightest of their fields. The next problem is that life is messy, and there often isn't a single answer or criterion which determines what is in the public interest.
Btw, for everyone's benefit, J-PAL is the Jameel Poverty Action Lab at MIT, with branches covering different parts of the world, since policies on addressing poverty necessarily differ depending on local circumstances. They might be described as a research institute or maybe a think tank, as they advocate for more-effective solutions to poverty and give advice on how to do that.
Reducing poverty, as an objective, can be roughly distilled into bringing everyone above some numerical economic figure. There may be different methods that bring people out of poverty, but it's fairly straightforward to assess the effectiveness of those solutions, by seeing how many people exit poverty and how much each solution costs.
Now take something like -- to stay with economics -- management of the central bank. The USA central bank (the Federal Reserve) operates under a dual mandate, which means it manages the currency with care to: 1) not let inflation run amok, and 2) keep USA unemployment low. The dual mandate is tricky because one tends to beget the other. So when both strike, what should a technocrat do? Sacrifice one goal short-term to achieve the other long-term? Try attacking both but perhaps fail at either?
Such choices are not straight yes/no or go/no-go questions, but are rightfully questions of policy and judgement. Is it fine to sell 10% of parkland for resource extraction if doing so comes with an iron-clad guarantee that the remaining 90% stays protected as wilderness in perpetuity? How about 25%? 60%?
Subject matter experts (SMEs) are excellent at their craft, but asking them to write public policy -- even with help from other SMEs -- won't address the fuzzy dilemmas that absolutely arise in governance.
In a democratic republic, voters not only choose the politician whose views they agree with, but also subscribe to that politician's sense of judgement for all of life's unknowns. Sometimes this goes well; sometimes that trust is misplaced. Although it's imperfect, this system can answer the fuzzy dilemmas which technocracies cannot.
Permanently Deleted
Irrespective of any subsequent arrests made, publicizing evidence of actual criminal activity is generally a social good, which often doesn't (but can) overlap with vigilantism. Taking the term broadly, vigilantism is doing something that the law can't/won't do. Wikipedia discusses the various definitions, some of which require the use of force (something conventionally reserved to the law or government) but the broadest definition would technically include whistleblowing and community activism. On the flip side, certain forms of publicizing evidence are illegal, such as leaking designated national secrets.
From a law perspective in the USA, apart from that rather narrow exception and a few others, the First Amendment's guarantee of free speech provides the legal cover to reveal genuine evidence of someone's criminal conduct, because: 1) criminal matters are in the public interest to expose, 2) an assailant cannot assert a privacy interest over the evidence of their crime, and 3) the truth cannot be penalized through defamation claims. That basically covers any applicable USA free speech exceptions, although someone accused could still file a frivolous lawsuit to financially harass the one who exposed the evidence. Such frivolous lawsuits are functionally banned only in the handful of states with anti-SLAPP laws, which is why more states and the feds need to adopt anti-SLAPP protections.
So from a legal perspective, leaking evidence of a crime is generally allowed. From a moral perspective, most would agree that it's a good thing; it's why we have things like public trials, to showcase evidence. But does exposing crimes on one's own constitute vigilantism? I would say no, but others with a different definition might say yes, even if they also agree that it's legally and morally correct.
!fitness@lemmy.world might appreciate this
You and friend 1 have working setups. Friend 2 can't seem to get their setup to work. So the problem has to be specific to friend 2's machine or network.
To start at the very basics: when WG is disabled, what are friend 2's DNS servers, as listed in "/etc/resolv.conf" (Linux) or in the output of "ipconfig /all" (Windows)? This can be an IPv4 or IPv6 address. Whatever it is, take note of it. Also try to ping it and make sure the ping is successful.
Then have friend 2 enable WG. Now try pinging the same DNS servers again. If this fails, you are one step closer to the problem. If this succeeds, then check to see if WG caused new DNS servers to replace the former ones.
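If it helps, here is a minimal sequence of those checks on the Linux side (the tunnel name wg0 and the 192.168.1.1 nameserver are just placeholders; substitute friend 2's actual values):

    # With WireGuard disabled: note the configured nameserver(s) and confirm one answers
    cat /etc/resolv.conf            # look for the "nameserver" lines
    ping -c 3 192.168.1.1           # replace with the nameserver found above

    # With WireGuard enabled: repeat the same checks
    sudo wg-quick up wg0            # assumes the tunnel config is named wg0
    cat /etc/resolv.conf            # did the nameserver change?
    ping -c 3 192.168.1.1           # is the original nameserver still reachable?
    nslookup example.com            # does name resolution actually work now?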
One possibility is that friend 2's home network also uses 192.168.8.X, and so the machine tries to reach the DNS servers by going through WG. But we need more details before making this conclusion.
You also said friend 2 can ping 9.9.9.9 (aka Quad9), but is this friend using Quad9 as their DNS server? If so, what exactly is observed when you say that "DNS doesn't resolve"? Is this an error in a browser or the result from running "nslookup" in the command line?
IPv6 isn't likely to be directly responsible for DNS resolution failures, but a misconfigured WG tunnel that blackholes an IPv6 DNS server is one way to create resolution failures. It may also just be a red herring, and the issue is contained entirely to IPv4. I would not recommend turning off IPv6, because that's almost always the wrong answer and merely sweeps the underlying problem under the rug.
It would help if you could recall what steps you did, a link to the instructions you followed, and what you're currently observing. Otherwise, we're all just guessing at what might be amiss.
I've been reading a lot of Soatok's blog, so when I see software that claims to be privacy-oriented, my first thought is: secure against what?
And in a refreshing change of pace, CryptPad actually outlines their threat model and how the software's features might widen certain threats, plus how to avoid those pitfalls. I'm not a security expert, but it's clear they paid at least some attention to assuring privacy, rather than just paying it lip service. So we're off to a good start.
I select hostnames drawn from the ordinal numerals of whatever language I happen to be trying to learn. Recently, it was Japanese, so the first host was named "ichiro", the second "jiro", and the third "saburo".
Those are the romanized spellings of the original kanji: 一郎, 二郎, and 三郎. These aren't the ordinal numbers per se (eg first, second, third) but an old way of assigning given names to male children. They literally mean "first son", "second son", "third son".
Previously, I did French ordinal numbers, and the benefit of naming this way is that I can enumerate a countably infinite number of hosts lol
Kubernetes does indeed have a learning curve, but it's also strangely accommodating for single-node setups, which can later be expanded simply by adding components, rather than tearing the whole thing down and starting again. In that sense, it's a great learning platform towards managing larger or commercial clusters, if only to get experience with the unique challenges inherent to scaling up.
But that might be more of a !homelab@lemmy.ml point of view haha
Ah, now I understand your setup. To answer the title question, I'll have to be a bit verbose with how I think Incus behaves, so that the Docker behavior can be put into context. Bear with me.
br0 has the same MAC as the eth0 interface
This behavior stood out to me, since it's not a fundamental part of Linux bridging, and it turns out this might be a systemd-specific thing. Creating a bridge is functionally equivalent to creating a software switch, where every port of the switch has its own MAC, and all "clients" of that switch also have their own MACs. If I had to guess, systemd reuses the eth0 MAC so that traffic from the physical interface which passes straight through to br0 keeps the MAC seen on the physical network, making traffic flows easier to follow in Wireshark, for example. I personally can't agree with this design choice, since it obfuscates what Linux is really doing vis-a-vis a software switch. But reusing the MAC here is merely a weird side-effect and doesn't influence what Incus is doing.
Instead, the reason Incus needs the bridge interface is precisely because a physical interface like eth0 will not automatically forward frames to subordinate interfaces, whereas for a virtual switch, that is the default. To that end, the bridge interface is combined with virtual ethernet (veth) interfaces -- another networking primitive in Linux -- running to each container that Incus manages. A veth behaves like a point-to-point network cable, plus the NICs on both ends. That means a veth always consists of a pair of interfaces, where traffic into one end comes out the other, and each interface has its own MAC address. Functionally, this is the networking equivalent of a bidirectional pipe.
By combining a bridge (ie a virtual switch) with veth (ie virtual cables), we have a full Layer 2 network topology that behaves identically to a physical bridge with physical cables. Thus, your DHCP server is none the wiser when it sends and receives BOOTP traffic for assigning an IP address. This is the most flexible way of constructing a virtual network within Linux, since it has feature parity with physical networks: no Macvlan or Ipvlan or tunneling or whatever is needed to make this work. Linux is just operating as a switch, with all the attendant flexibility. This architecture is what Calico -- a network framework for Kubernetes -- uses in order to achieve scalable, Layer 3 connectivity to containers; by default, Kubernetes does not depend on Layer 2 to function.
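To make that concrete, here's a rough sketch of the primitives that Incus wires up for you, expressed as plain iproute2 commands (interface and namespace names are purely illustrative; Incus does all of this automatically):

    ip link add br0 type bridge                       # the software switch
    ip link set eth0 master br0                       # uplink the physical NIC into the switch
    ip netns add mycontainer                          # stand-in for a container's network namespace
    ip link add veth0 type veth peer name veth0-ct    # a virtual "cable" with two ends
    ip link set veth0 master br0                      # plug one end into the switch
    ip link set veth0-ct netns mycontainer            # hand the other end to the container's namespace
    ip link set br0 up
    ip link set veth0 up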
OK, so we now understand why Incus does things the way it does. For Docker, when using the Macvlan driver, the benefits of the bridge+veth model are not achieved, because Macvlan -- although a feature of Linux networking -- is implemented against an individual interface on the host. Compare this to a bridge, which is a standalone concept and thus can exist with or without any host interfaces attached to it: when Linux is actually used as a switch -- like on many home routers -- the host itself can choose to have zero interfaces attached to the switch, meaning that traffic flows through the box, rather than to the box as a destination.
So when creating subordinate interfaces using Macvlan, we get most of the same bridging behavior as bridge+veth, but the Macvlan implementation in the kernel means that outbound traffic from a subordinate interface always gets put onto the outbound queue of the parent interface. This makes it impossible, by design, for a subordinate interface to exchange traffic with the host itself. Had they chosen to go the extra mile, they would have just reinvented a version of bridge+veth that is excessively niche.
We also need to discuss the behavior of Docker networks. Similar to Kubernetes, containers managed by Docker mandate having IP connectivity (Layer 3). But whereas Kubernetes will not start a container unless an IPAM (IP Address Management) plugin explicitly provides an IP address, Docker's legacy behavior is to always generate a random IP address from a default range, unless given an IP explicitly. So even though bridge+veth or Macvlan would give a container the Layer 2 connectivity to reach a DHCP server and obtain an IP address, Docker is eager to provide an IP itself, just so the container has one from the very start. The distinction between Docker and Kubernetes+Calico is thus one of actual utility: by getting an address from Calico's IPAM, Kubernetes knows that the address will actually work for networking, because Calico also creates/manages the network. Whereas Docker has no problem assigning an IP without actually checking whether that IP can be used on the network; it's almost a pro-forma exercise.
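As a quick illustration of that eagerness (the network name and subnet below are made up), Docker will happily pick an address the moment a container starts, whether or not anything else on that network would agree with the choice:

    docker network create --subnet 172.30.0.0/16 demo-net            # a user-defined bridge network
    docker run --rm --network demo-net alpine ip -4 addr show eth0   # Docker's IPAM picks an address from the range
    docker run --rm --network demo-net --ip 172.30.0.50 alpine ip -4 addr show eth0   # or you pin one explicitly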
I will say this about early Docker: although they led the charge for making containers useful, the way they implemented networking was very strange. It produced a whole class of engineers who now have a deep misunderstanding of how real networks operate, and that only causes confusion when scaling up to orchestrated container frameworks like Kubernetes, which depend on a rigorous understanding of networking and of the Linux implementation. But all the same, Docker was more interested in getting things working without external dependencies like DHCP servers, so there's some sense in mandating an IP locally; perhaps they didn't yet envision that containers would talk to the physical network.
The plugin that you mentioned operates by requesting a DHCP-assigned address for each container, but from within the Docker runtime. Once it obtains that address, it statically assigns it to the container. So from the container's perspective, it simply gets an IP assigned to it, unaware that DHCP was involved at all. The plugin is then responsible for renewing that lease periodically. It's a kludge to satisfy Docker's networking requirements while still using DHCP-assigned addresses. But Docker just doesn't play well with Layer 2 physical networks, because otherwise the responsibility for running the DHCP client would fall to the containers, and some containers might not even have a DHCP client to run.
If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know!
Sadly, there just isn't a really good way to do this within Docker, and it's not the kernel's fault. Other container runtimes like containerd -- which relies wholly on the standard CNI plugins and thus doesn't have Docker's networking footguns -- have no problem with containers running their own DHCP client on a bridged network. But for any container manager to handle DHCP assignment without the container's cooperation always leads to the same kludge as what Docker did. And that's probably why no major container manager does that natively; it's hard to solve.
I do wish there could be something like Incus’ hassle-free solution for Docker or Podman.
Since your containers were able to get their own DHCP addresses from a bridged network in Incus, can you still run a DHCP client in those containers to override Docker's randomly-assigned local IP address? You'd have to use the bridge network driver in Docker, since you also want host-container traffic to work and we know Macvlan won't do that. But even this is a delicate solution: if DHCP fails to assign an address, your container still has the Docker-assigned address, but that address won't be usable on the bridged network.
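If you want to experiment with that, a rough sketch might look like the following (names are hypothetical; it assumes the image ships a DHCP client, like BusyBox's udhcpc in Alpine, and that the bridge network in question actually reaches a DHCP server -- a stock Docker bridge by itself is not attached to your physical LAN):

    # NET_ADMIN is needed so the container may reconfigure its own interface
    docker run -it --network my-bridge --cap-add NET_ADMIN alpine sh
    # then, inside the container:
    udhcpc -i eth0            # broadcast a DHCP request out of the container's interface
    ip -4 addr show eth0      # check what address, if any, was obtained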
The best solution I've seen for containers on DHCP-assigned networks is to not use DHCP assignment at all. Instead, part of the IP subnet is carved out, a region which is dedicated only for containers. So in a home IPv4 network like 192.168.69.0/24, the DHCP server would be restricted to only assigning 192.168.69.2 through 192.168.69.127, and then Docker would be allowed to allocate the addresses from 192.168.69.128 to 192.168.69.254 however it wants, with a subnet mask of 255.255.255.0. This mask allows containers to speak directly to addresses in the entire 192.168.69.0/24 range, which includes the rest of the network. The other physical hosts do the same, allowing them to connect to containers.
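One way to express that carve-out in Docker terms (the network name and parent interface are illustrative, and the macvlan driver is just one example; the same --subnet/--ip-range split applies to other drivers):

    # The LAN's DHCP server only hands out 192.168.69.2 - 192.168.69.127;
    # Docker is given the upper half of the /24 to allocate on its own.
    docker network create -d macvlan \
      --subnet=192.168.69.0/24 \
      --gateway=192.168.69.1 \
      --ip-range=192.168.69.128/25 \
      -o parent=eth0 \
      lan-containers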
This neatly avoids interacting with the DHCP server, but at the cost of central management, and it splits the allocatable addresses into smaller pools, potentially exhausting one pool while the other still has spare addresses. Yet another reason to adopt IPv6 as the standard for containers, but I digress. For Kubernetes and similar orchestration frameworks, DHCP isn't even considered, since the orchestrator must have full internal authority to assign addresses through its chosen IPAM plugin.
TL;DR: if your containers are like mini VMs, DHCP assignment is doable. But if they're pre-packaged appliances, then only sadness results when trying to use DHCP.
All it did was a lookup into a fairly sparse array. All the kerfuffle about it was unduly placed.
I want to make sure I've understood your initial configuration correctly, as well as what you've tried.
In the original setup, you have eth0 as the interface to the rest of your network, and eth0 obtains a DHCP-assigned address from the DHCP server. Against eth0, you created a bridge interface br0, and your host also obtains a DHCP-assigned address on br0. Then in Incus, you created a Macvlan network against br0, such that each container attached to this network gets a random MAC, and all the container Ethernet frames are bridged to br0, which in turn bridges to eth0. In this way, each container can receive its own DHCP-assigned address. Also, each container can send traffic to the br0 IP address to access services running on the host. Do I have that right?
For your Docker attempt, it looks like you created a Docker network using the Macvlan driver, but it wasn't clear to me if the parent interface here was eth0 or br0, if you still have br0. When you say "I have MACVLAN working", can you describe which aspect is working? Unique MAC assignment? Bridged traffic to/from the containers or the network?
I'm not very familiar with Incus, and I'm entirely in the dark about this shoddy plugin you mentioned for getting DHCP and Macvlan to work. So far as I'm aware, Docker Engine creates networks through its own built-in drivers, so the "-d macvlan" parameter specifies which driver will load. Since this would all be at Layer 2, I don't see why a plugin is needed to support DHCP -- v4 or v6? -- traffic.
And the host cannot contact the container due to the MACVLAN method
Correct, but this is remedied by what's to follow...
Can I make another bridge device off of br0 and bind to that one host-like?
Yes, this post seems to do exactly that: https://kcore.org/2020/08/18/macvlan-host-access/
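The gist of that workaround, for reference, is to give the host its own Macvlan subordinate interface so that the host and the containers become peers on the parent interface (names and addresses below are illustrative, loosely following that post's approach; use the same parent interface as your Docker Macvlan network):

    ip link add macvlan-shim link eth0 type macvlan mode bridge   # a subordinate interface for the host itself
    ip addr add 192.168.1.200/32 dev macvlan-shim                 # an address reserved for the host's shim
    ip link set macvlan-shim up
    ip route add 192.168.1.128/25 dev macvlan-shim                # reach the containers' address range via the shim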
I can always put a Docker/podman inside of an Incus container, but I'd like to avoid onioning if possible.
I think you're right to avoid stacking multiple container management tools, if only because it's generally unnecessary. That said, Incus looks more akin to Proxmox, in that it supports managing both VMs and containers, whereas Podman and Docker only manage containers -- which is further still distinct from the container runtime (eg CRI-O, containerd, or Docker Engine, which uses containerd under the hood).
I once had the displeasure of facing a Bash script that was 35 lines tall but over 800 columns wide. The bulk of it was a two-dimensional array -- or rather, a behemoth that behaved like an array of arrays -- with way, way too many fields.
If that wasn't bad enough, my code review to essentially rotate the table 90 degrees was rejected because -- and I kid you not -- the change was unreviewable in any of our tools and thus deemed too risky to accept. /facepalm
The gall of some people.
Movies would have people believe that the jets are there to shoot down the errant jet. During the Cold War, this was entirely plausible and did happen. But more commonly, when a fighter jet is sent to intercept an unknown aircraft -- perhaps one that has entered restricted or prohibited airspace -- it may be just to have eyes on the situation.
Airspace is huge. The vastness of the air is like the vastness of the sea. Sometimes that's an advantage, because there are fewer things to hit. But on the flip side, if an aircraft needs assistance, there might not be anyone for many miles in any direction. As for what an assisting fighter jet can do, the first thing is to establish navigational accuracy. History has shown that airplanes can get lost, and sometimes unfortunately end up hitting mountains or running into known obstacles or weather. A second aircraft can confirm the first aircraft's position, since two separate aircraft having navigational problems at the same time is exceptionally rare.
The next thing is having eyes on the outside of the aircraft. Things like a damaged engine on a jetliner aren't visible to the pilots, though there's a chance the passengers or cabin crew can look. But damage to a rudder is impossible to see from inside the aircraft; I'm not yet aware of a commercial aircraft equipped with a tail-viewing camera. Checking the condition of the landing gear is also valuable information, if a jetliner has taken damage but is still aloft.
Finally, if it should come to it, an assisting aircraft can be the pilot's eyes, if for some reason the pilots can no longer see out their windscreen. At this point, the flight may already be close to the end but it may help avoid additional casualties on the ground. I'm reminded of the flight where volcanic ash sandblasted the windshield, or when a cargo jet had a fire onboard which filled the cockpit with thick smoke.
To be clear, neither incident was aided by fighter jets, but having an external set of eyes to give directions would have made things a little bit easier for the pilots. Other aircraft besides fighter jets can provide assistance, such as any helicopters or private pilots in the area. But of course, fighter jets are on-standby and can get to a scene very fast.
Permanently Deleted
Absolutely. An example of a malicious collision would be to request the file with the SHA-1 of 38762cf7f55934b34d179ae6a4c80cadccbb7f0a. But... there's two of them here.
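For anyone who wants to see it first-hand, the two proof-of-concept PDFs from the SHAttered disclosure demonstrate it nicely (assuming they are still hosted at shattered.io):

    curl -sO https://shattered.io/static/shattered-1.pdf
    curl -sO https://shattered.io/static/shattered-2.pdf
    sha1sum shattered-1.pdf shattered-2.pdf     # both report 38762cf7f55934b34d179ae6a4c80cadccbb7f0a
    sha256sum shattered-1.pdf shattered-2.pdf   # yet the files differ: the SHA-256 digests do not match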
MD5 is so broken that its former status as a cryptographic hash function has been stripped. And efforts are underway to replace SHA-1 where it's used, since although it takes some prerequisites to intentionally create a SHA-1 collision today, it's worth remembering that "attacks always get better, they never get worse".
Permanently Deleted
provide the hash of an arbitrarily large file and retrieve it from the network
I sense an XY Problem scenario. Can you explain what you're seeking to ultimately build and what requirements you have?
Does the solution need to be distributed? Does the retrieval need to complete ASAP, or can it wait until the data becomes available? What sort of reliability/availability does this need? If only certain hash algorithms can be supported, which ones do you need and why?
I ask this because the answer will be drastically different if you're building the content distribution system for a small video game versus building the successor to Kim Dotcom's Mega file-sharing service.
Ctrl Alt Speech: a podcast by TechDirt's Mike Masnick (who coined the term "Streisand Effect") about online speech and content regulation, and how it's not at all a simple or straightforward task.
Feed: https://feeds.buzzsprout.com/2315966.rss
Soatok's Dhole Moments: a blog on cryptography and computer security, with in-depth algorithm discussions interspersed with entertaining furry art. SFW. Also find Soatok on Mastodon.
Feed: https://soatok.blog/feed/
Molly White's Citation Needed newsletter: critiques of cryptocurrency, regulations, policies, and news. Available as a podcast too. Also find Molly White on Mastodon. She also has a site dedicated to cryptocurrency disasters.
It's for this reason that I sometimes spell out the units as: 1000 GBytes/sec or 1000 Gbits/sec. In my book, Byte is always "big B" and bit is always "little b", and then spelling it out makes it unambiguous in writing.