GitLab takes down Nintendo Switch emulator suyu due to the DMCA
Atemu @lemmy.ml · Posts 63 · Comments 1,439 · Joined 5 yr. ago

Yes. Low power draws add up. 5 W here, 10 W there, and you're already looking at >3 € per month.
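Back-of-the-envelope, assuming a price of about 0.35 €/kWh (your tariff will differ):

```python
# Rough monthly cost of a constant parasitic draw; the 15 W and the
# 0.35 €/kWh price are assumptions, plug in your own numbers.
watts = 15                    # e.g. 5 W here, 10 W there
price_per_kwh = 0.35          # €/kWh, assumed
kwh_per_month = watts / 1000 * 24 * 30
print(f"{kwh_per_month:.1f} kWh/month ≈ {kwh_per_month * price_per_kwh:.2f} €/month")
# -> 10.8 kWh/month ≈ 3.78 €/month
```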
Your home router probably has no clue where that is, so it goes to its upstream router and asks if they know; this process repeats until one figures it out and you get a route.
That's not how that works. The router merely sends the packet to the next directly connected router.
Let's take a simplified example:
If you were in the middle of bumfuck nowhere, USA, and wanted to send a packet to Kyouto, Japan, your router would send the packet to another router it's connected to on the west coast. From your router's perspective, that's it; it just sends it over and never "thinks" about that packet again.
The router on the west coast receives the packet, looks at the headers, sees that it's supposed to go to Japan and sends it over a link to Hawaii.
The router in Hawaii again looks at the packet, sees that it's supposed to go to Japan and sends it over its link to Toukyou.
The router in Toukyou then sends it over its link to Kyouto and it'll be locally routed further to the exact host from there but you get the idea.
This is generally how IP routing works; always one hop to the next.
What I haven't explained is how your router knows that it can reach Kyouto via the west coast or how the west coast knows that it can reach Kyouto via Hawaii.
This is where routing protocols come in. You can look up how exactly these work in detail but what's important is their purpose: Build a "map" of the internet which you can look at to tell which way to send a packet at each intersection depending on its destination.
In operation, each router then simply looks at the one intersection it represents on the "map" and can then decide which way (link) to send each individual packet over.
The "map" (routing table) is continuously updated as conditions change.
Never at any point do routers establish a fixed route from one point to another or anything resembling a connection; the internet protocol is explicitly connectionless.
In reality, there will be a few local routers between the gateway router sitting in your home and the big router that has a big link to the west coast.
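To make the "one hop at a time" idea concrete, here's a toy longest-prefix-match lookup; the prefixes and link names are entirely made up and a real router's table looks nothing like a Python dict, but the per-packet decision is essentially this:

```python
# Toy forwarding decision: pick the most specific matching prefix and
# send the packet out of the link associated with it. Made-up entries.
import ipaddress

routing_table = {
    "0.0.0.0/0":       "upstream",      # default route: "somebody upstream knows"
    "198.51.100.0/24": "west-coast",    # pretend this prefix lives "towards Japan"
    "192.168.0.0/16":  "lan",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(prefix), link)
               for prefix, link in routing_table.items()
               if addr in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(next_hop("198.51.100.7"))  # -> west-coast
print(next_hop("8.8.8.8"))       # -> upstream
```

Each router on the path runs this kind of lookup independently; none of them knows or cares about the full path.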
This isn't about copyright, it's about whether the software's purpose is to break DRM. Ninty argued that Yuzu's primary purpose is to enable copyright infringement, which is forbidden under the DMCA; not just the infringement itself, but even merely building tools that enable it. The latter is the critical (and IMHO insane) part.
Now, all of that is obviously BS but Ninty SLAPPed Yuzu to death, so it doesn't matter what's just or unjust; they win. God bless corporate America.
I am ashamed of GitLab.
Don't be. GitLab has to comply with the law.
It's the law that's broken, not GitLab.
It’s absolutely ridiculous they took it down even though Nintendo didn’t DMCA the Suyu project directly.
Um, no. If shitty corpo X (ab)uses the DMCA to send you a takedown notice for some project and you also host a fork of the same project, you must take down the fork too.
"You see, while this might be the exact same code, the name is totally different, so we don't have to take it down!" will not hold up in court.
Whether the DMCA request is valid or not is an entirely separate question. You must still comply or open yourself up to legal liabilities.
The process to object to the validity of the request is included in the screenshot.
Depends on how much of our needs would be covered. Not needing to work to survive is different from not needing to work to live a comfortable life which is again different from living a luxurious life.
Note that all of this is in the context of backups; duplicates for the purpose of restoring the originals in case something happens to them. Though it is at least possible to use an indexed cold-storage system like the one I describe for more frequent access, I would find that very inconvenient for "hot" data.
how would you use an index’d storage base if the drives weren’t connected
You look up in your index where the data you need is located, connect that single location (i.e. plug in the drive) and then copy the data back to the place it went missing from.
The difference is that, with an index, you gain granularity. If you only need file A, you don't need to connect all 12 backup drives, just the one that has file A on it.
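A minimal sketch of what such an index can look like; the file names and drive labels here are made up, and in practice this could be a plain text file or git-annex's location log:

```python
# Toy index mapping files to the (offline) drive that holds them.
index = {
    "photos/2019-summer.tar": "ide-disk-03",
    "file_A.iso":             "ide-disk-07",
    "mail-archive.mbox":      "ide-disk-11",
}

needed = "file_A.iso"
print(f"plug in {index[needed]} to restore {needed}")
# -> plug in ide-disk-07 to restore file_A.iso
```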
Could you upload the output of `systemd-analyze plot`?
The problem is that I didn't mean to write to the HDD but to a USB stick; I typed the wrong letter out of habit from the old PC.
For that issue, I recommend never using unstable device names and always using `/dev/disk/by-id/`.
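If you want to see what those stable names resolve to on your machine, a read-only sketch like this works (it only prints the mapping and assumes a Linux system with udev):

```python
# Print each persistent /dev/disk/by-id name and the kernel device node
# (e.g. /dev/sdb) it currently points to. Purely read-only.
import os

by_id = "/dev/disk/by-id"
for name in sorted(os.listdir(by_id)):
    print(f"{name} -> {os.path.realpath(os.path.join(by_id, name))}")
```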
As for the hard drives, I'm already trying to do that; for bigger files I just break them up with split. I'm just waiting until I have enough disks to do that.
I'd highly recommend starting to back up the most important data ASAP rather than waiting until you can back up all of it.
That would require all of those disks to be connected at once, which is a logistical nightmare. It would already be hard with modern drives, but consider that we're talking IDE drives here; it's hard enough to connect one of them to a modern system, let alone 12 simultaneously.
With an index, you also gain the ability to lose and restore partial data. With a RAID array it's all or nothing; you have to waste a bunch of space to be able to restore everything at once. Using an index, you can simply check which data was lost and prepare another copy of that data on a spare drive.
I feel it was a direct reply to the comment above.
At no point did it mention livepatching.
Dinosaurs don’t want to give up their extended LTS kernels because upgrading is a hassle and often requires rebooting, occasionally to a bad state.
No, dinosaurs want LTS because it's stable; it's in the name.
You can't load your proprietary shitware kernel module into any kernel other than the one whose ABI it was built for. You can't run your proprietary legacy heap-of-crap service on newer kernels where the kernel APIs behave slightly differently.
how can you bring your userbase forward so you don’t have to keep slapping security patches onto an ancient kernel?
That still has nothing to do with livepatching.
You probably could. Though I don't see the point in powering a home server over PoE.
A random SBC in the closet? WAP? Sure. Not a home server though.
10% worse efficiency > no refrigerator
It depends on whether the game wants that or not; it must explicitly opt in. If it weren't Steam offering its extremely non-intrusive DRM, those games would likely use more intrusive DRM systems instead, such as their own launchers or worse.
It also somehow doesn't feel right to call it "DRM" since it has none of the downsides of "traditional" DRM systems: it works offline, doesn't cause performance issues and doesn't get in your way (at least it has never once gotten in mine).
I'd much rather launch the games through Steam anyways though. Do you manually open the games' locations and then open their executables or what? A nice GUI with favourites, friends and a big "play" button is just a lot better IMHO.
Kernel livepatching is super niche and I don't see what it has to do with the topic at hand.
This is not true. As soon as the key is wiped from the TPM-like thingy, any data left on the flash is unrecoverable.
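The principle ("crypto-erase") is easy to demonstrate in software; here the `cryptography` package's Fernet stands in for the key that would normally never leave the hardware, which is obviously a big simplification:

```python
# Illustration of crypto-erase: data encrypted under a key that only lives
# in the TPM-like chip turns into unrecoverable noise once that key is gone.
# Fernet is just a software stand-in for that hardware-held key.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                       # lives "inside the chip"
ciphertext = Fernet(key).encrypt(b"user data on flash")

key = None                                        # "wipe the key from the TPM"

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # any other key fails
except InvalidToken:
    print("remaining flash contents are unrecoverable noise")
```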
I'm trying to do that, but all of the newer drives I have are being used in machines, while the ones that aren't connected to anything are old 80 GB IDE drives, so they aren't really practical to back up 1 TB of data on.
It's possible to make that work, through discipline and mechanism.
You'd need like 12 of them, but if you carve your data into <80 GB chunks, you can store each chunk on a separate scrap drive and thereby back up 1 TB of data.
Individual files >80 GB are a bit more tricky but can also be handled by splitting them into parts.
What such a system requires is rigorous documentation of where stuff is: an index. I use git-annex for this purpose, which comes with many mechanisms to aid this sort of setup, but it's quite a beast in terms of complexity. You could do every important thing it does manually, without unreasonable effort, through discipline.
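A hedged sketch of the manual variant: before copying a staging directory onto one of the scrap drives, record each file's checksum and destination drive in a small JSON index (paths and drive labels are made up; git-annex does all of this, and much more, for you):

```python
# Minimal hand-rolled index: remember which drive each file went to and its
# checksum so a later restore can be located and verified. Made-up names.
import hashlib
import json
import pathlib

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def add_to_index(index_file: str, staging_dir: str, drive_label: str) -> None:
    index_path = pathlib.Path(index_file)
    index = json.loads(index_path.read_text()) if index_path.exists() else {}
    for path in pathlib.Path(staging_dir).rglob("*"):
        if path.is_file():
            index[str(path)] = {"drive": drive_label, "sha256": sha256(path)}
    index_path.write_text(json.dumps(index, indent=2, sort_keys=True))

# e.g. before filling the disk labelled "ide-disk-03":
# add_to_index("backup-index.json", "staging/", "ide-disk-03")
```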
For the most part I prevented myself from making the same mistake again by adding a 1 GB swap partition at the beginning of the disk, so it doesn't immediately kill the partition if I mess up again.
Another good practice is to attempt any changes on a test model first. You'd create a sparse test image (`truncate -s 1TB disk.img`), mount it via loopback and apply the same partition and filesystem layout that your actual disk has. Then you first attempt any changes you plan to make on that loopback device and verify that its filesystems still work.
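If you want to script that rehearsal, a rough sketch (the loop device needs root, `truncate`/`losetup` come from coreutils/util-linux, and the size is just the one from above):

```python
# Rehearse risky disk surgery on a sparse image instead of real hardware.
# Requires root for losetup; nothing here touches a real disk.
import subprocess

img = "disk-test.img"
subprocess.run(["truncate", "-s", "1TB", img], check=True)   # sparse ~1 TB file
loopdev = subprocess.run(
    ["losetup", "--find", "--show", img],
    check=True, capture_output=True, text=True,
).stdout.strip()
try:
    print("rehearse on", loopdev)
    # Replicate the real disk's partition table and filesystems here
    # (sfdisk/mkfs), attempt the planned change, then fsck everything.
finally:
    subprocess.run(["losetup", "--detach", loopdev], check=True)
```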
If you're not looking to question your views, then ignore people like me who do. Though as a general rule of thumb, not questioning your own views may not be the best strategy in life but you do you.
In the screenshot, it says that GitLab received a DMCA request.