
Posts 0 · Comments 309 · Joined 2 yr. ago

  • Not really your fault if people fall for stereotypes, no?

    On the other hand, I think the majority of women don't actually care that much about getting their cervix hammered during intercourse. I know it sounds crazy. And they don't like oral sex because men's tongues are so long either.

  • I was under the impression this is already the norm for network equipment, because the sheer volume of data can no longer be processed by the kernel. In fairness, though, that equipment most likely doesn't really consume the data but rather just forwards it.

  • "The issue with X11 is that it got big and bloated, and unmaintainable, containing useless code. None of these desktops use that useless code, still in X from the time where 20 machines were all connected to 1 mainframe."

    I don't think that is very fair to say. From what I've heard, the X.org code, as in the implementation of the protocol and its extensions, is actually of very high quality, so it can be maintained just fine. The problem, as you correctly describe, is the design and the resulting protocol with its extensions, which no longer fit modern needs.

    It's also not like multiple X11 servers implementing the X Window System couldn't theoretically have existed simultaneously; it was just too much effort given the complexity of the protocol. In fact, for a short time, two different implementations did exist: XFree86 and the X.org server. Granted, the latter was a fork of the former, but they were independent projects during the time of their coexistence.

  • When I opened GitHub this morning, Godot was the #1 trending repository. So yeah.

    Everyone with half a brain could have seen something like this coming from a mile away, yet here we are. If you lock yourself in with a proprietary vendor, they can screw you over later. See also Reddit. And if I were to venture a guess, the same will happen with Discord.

  • I guess yay is an easy solution, but it's not very clean, at least from what I remember and just checked. It might be fine for single machines, but since it doesn't build in a clean chroot, you can never be sure that the claimed dependencies are actually complete. As a result, a package built with yay on one machine doesn't necessarily work on another, even for the same processor type (portability might not be possible anyway if you build with -march=native). It also doesn't handle automatic rebuilds for necessary .so bumps, but that is generally non-trivial to solve AFAIK.

    When I still used Arch exclusively, I had my own repository set up via aurutils on a remote server. Granted, this doesn't handle .so bumps by itself either, but at least you get somewhat clean packages every time, and you'll start to notice how many AUR packages are actually broken. The most common occurrence is git not being listed in makedepends for packages that retrieve their sources via Git, because everyone using the AUR has it installed anyway to access anything on there. Granted, that one is a non-issue in practice, but it's not the only one.

  • Try an Ubuntu dist-upgrade with Nvidia drivers installed and report back.

    It's a bit of cheating, because Ubuntu supports dist-upgrade only if you use nothing but universe. Granted, it might still work, but if it doesn't, your paid support will just tell you to restore a backup and try again with universe packages only. At least that's what I heard. But it's been a long time since I used Ubuntu, and maybe the situation is better nowadays. Back then, this screwed me over big time.

  • Yeah, Arch is one of the better distributions regarding Nvidia support. Props to the team that makes it work. Last I heard, that is not the norm, though. If you get a new card and all you care about is raster performance, there's no need to go for anything but AMD. Granted, I believe Nvidia does everything else better (ray tracing, energy efficiency, compute), or at least not worse, but the open-driver experience is just pure bliss.

    Oh, and fuck Nvidia. I really hope the EXPORT_SYMBOL_GPL-related changes in 6.6 screw them over.

  • And while they can use older kernels for now (because they're still supported), they won't be able to forever.

    It might even happen that the change gets backported to the older stable kernel series; that would mean every supported kernel has it in place, regardless of version.

  • There's an interesting discussion about the whole topic on the Phoronix forums. Some people claim that removing those checks, and Nvidia's current behavior in general, amount to a DMCA violation:

    1. The kernel includes IP that is licensed only under GPLv2.
    2. While a module linked against the kernel isn't necessarily a derived work (which would in turn need to be licensed under GPLv2 as well), there are specific interfaces that are meant for internal use and by their very nature make any work using them a derived work. These are the interfaces marked with EXPORT_SYMBOL_GPL (sketched in the snippet further down).
    3. By using these interfaces from a module not licensed under GPLv2, you taint the kernel and violate the licensing.
    4. By removing the check, you aren't necessarily violating GPLv2 yet, but you are removing a technical protection measure, which is a violation of the DMCA.

    It also raises the question of why you'd remove checks that only prevent a possible GPLv2 violation if, as Nvidia claims, you're not violating GPLv2 anyway.
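
    For anyone who hasn't seen how that check looks on the code level: the kernel tags individual exported symbols as GPL-only, and the module loader compares that tag against the MODULE_LICENSE() declaration of whatever module tries to use them. Below is a minimal sketch of the mechanism; the helper names are made up, only EXPORT_SYMBOL, EXPORT_SYMBOL_GPL and MODULE_LICENSE are real kernel macros, and in reality the two halves live in separate source trees.

    ```c
    /* ---- somewhere in the kernel: two exported symbols, one GPL-only ---- */
    #include <linux/module.h>

    int example_generic_helper(void)
    {
            return 0;
    }
    EXPORT_SYMBOL(example_generic_helper);        /* any module may use this */

    int example_internal_helper(void)
    {
            return 0;
    }
    EXPORT_SYMBOL_GPL(example_internal_helper);   /* GPL-compatible modules only */

    /* ---- an out-of-tree, non-GPL module trying to use them ---- */
    MODULE_LICENSE("Proprietary");   /* taints the kernel on load */

    static int __init demo_init(void)
    {
            /* example_generic_helper() resolves fine; calling
             * example_internal_helper() instead would make modpost/insmod
             * reject the module because of the license check. */
            return example_generic_helper();
    }
    module_init(demo_init);
    ```

    That license comparison is the "technical protection measure" the DMCA argument above is referring to.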

  • USE flags have some inherent "issues", or rather downsides, that make them a non-option for some distributions.

    First, they create a much larger number of package variants: simplified, 2^(number of USE flags applicable to the package). This is fine if you don't intend to supply binaries to your users. Second, the gain they bring to the average workstation is rather insignificant today. Users usually want all functionality available rather than saving 30 kB of RAM, only to suddenly have to rebuild world because they find out they're missing a USE flag they now need. Also, providing any kind of support for a system where the user doesn't run the binaries you provided, and maybe even changed dependencies (e.g. LibreSSL instead of OpenSSL), is probably impossible.
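
    Just to illustrate the combinatorics of that first point (numbers picked arbitrarily): a package with 10 applicable USE flags would, in the worst case, require 2^10 = 1024 prebuilt variants to cover every combination, which is obviously not something a binary distribution can ship.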

    It's very cool stuff if you want to build a system very specific to your needs and hardware, and I do believe NixOS could have benefited from something similar in some areas, but I don't have specific ideas.

  • It's honestly cool stuff, but I don't think a lot of people actually actively want that.

    I tried something similar with Exherbo once but couldn't get it to boot after installation. I don't remember the specifics, but I was trying to use LibreSSL instead of OpenSSL.

  • For me, it was rather the opposite: when dropping IPv6 packets, applications would often hang and behave weirdly. Disabling IPv6 completely would mean they'd stop trying to do anything on IPv6 and function well.

  • I recently switched to NixOS, and GNU Guix was also a possible option. While in retrospect I can agree with your points, there were two things NixOS offers that I wanted and Guix doesn't:

    1. Non-free packages (though I guess I could have used Nonguix)
    2. An option for Secure Boot

    I don't think the assessment "it's badly designed" is fair, and your conclusion

    "Years and years of technical issues plague the project and there seems to be little interest in actually resolving these issues. Guix is comparatively much newer, yet the UX is much better and there are constant improvements in many areas. It also has the advantage of being built from the ground up with a clear design in mind."

    (emphasis mine) is misleading; it's not better despite being newer, it's better because it is newer and was able to learn from Nix and improve upon it. Also, what would you call the Nix whitepaper if not the design behind Nix?

  • Some issues just stem from Ubuntu itself, though. Granted, those aren't all of them, and maybe not even a big portion, but they do exist. I had huge issues upgrading Ubuntu back when I used it if Nvidia drivers were installed; on Arch, it was trivial. At work, we have VMs running Ubuntu 20.04, and we were advised not to upgrade because they no longer work correctly after upgrading (these are special VMs outside our company network, used for testing and the like, administered by their users, with only the initial image rolled out centrally).

    I can see why a new user might be attracted to Ubuntu, and without trying to talk anyone down, my reasoning was aimed more at educated users who make an informed decision about which distribution to run, which is not something you can ask of a novice.

    Also, while I know this isn't the best metric, Debian currently ranks above Ubuntu on DistroWatch, so the interest is there, which is nice. Personally, I wouldn't recommend anything Debian-based to experienced users, but I wouldn't explicitly warn against Debian either. I think their approach to building a distribution is outdated, but they're a driving force behind some innovations like reproducible builds, so it kind of evens out.

  • For me, the question is rather: what's the current raison d'être for Ubuntu if you're not looking for Debian with paid support?

    Granted, it's been a long time since I've used it (from 2005 or so until 2008, when I switched to Arch), but there's no really appealing quality for me there that I couldn't have with Debian. Apart from that, Canonical keeps making questionable decisions: snap, as others have mentioned, is a total disaster in my opinion; Mir was another of their misadventures (later retrofitted into a Wayland compositor); upstart didn't turn out successful (though to give credit, it was an honest attempt at a new init system and lessons were learned); the LXD maintainer issue as of late leaves a sour taste in my mouth; plus they were always very community-unfriendly with their CLAs. And all this for what? Might as well use their upstream instead.