
apt_install_coffee @lemmy.ml
Posts: 0 · Comments: 131 · Joined: 3 yr. ago

  • For the same reason spoken languages often have semantic structures that make a literal translation cumbersome or incorrect, translating nontrivial code from one language to another without being a near expert in both languages, as well as an expert in the project in question, can lead to differences in behaviour ranging from "it crashes and takes down the OS with it" to "it performs worse".

  • A rather overly simplistic view of filesystem design.

    More complex data structures are harder to optimise for pretty much every operation, but I'd suggest that by far the most important metric for performance is development time.

  • Yes, but note that neither the Linux Foundation nor OpenZFS is going to put itself at legal risk on the word of a Stack Exchange comment, no matter who it's from. Even if their legal teams see no issue, Oracle has a reputation for being litigious, and the fact that they haven't resolved the issue once and for all, despite being able to, suggests they're keeping the possibility of litigation in their back pocket (regardless of whether such a case would have merit).

    Canonical has said they don't think there is an issue and have put their money where their mouth is, but they are one of the very few to do so.

  • Brand new anything will not show up with amazing performance, because the primary focus is correctness, with features secondary.

    Premature optimisation could kill a project's maintainability; wait a few years. Even then, despite Kent's optimism, I'm not certain we'll see performance beating a good non-CoW filesystem; XFS and ext4 have been eking out performance gains for many years.

  • I'll also tack on that when you use cloud storage, what do you think your stuff is stored on at the end of the day? Sure as shit not Bcachefs yet, but more likely than not it's on some NetApp appliance offering the same features that Bcachefs is developing.

  • Did the citizens of that country take the loan? No

    Did they benefit at all from the loan? No

    Did the World Bank make any effort to ensure the above were answered 'yes'? No

    When you make a leveraged loan, are you supposed to be guaranteed that it's risk-free? No

    If leveraged loans could be made risk-free 'break your legs' style, the way the World Bank does to countries, banks would be offering loans to every punter who wanted to bet on the dogs.

  • In addition to the comment mentioning better hardware flexibility, I've seen really interesting features like defining compression & deduplication in a granular way, even to the point of using one compression algorithm when you first write data and a different, more expensive one when your computer is idle.
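    If I'm remembering the bcachefs-tools options right (treat this as a sketch and check bcachefs-format(8); the device path is a placeholder), that looks something like:

        # cheap compression on the write path, heavier recompression when idle
        bcachefs format --compression=lz4 --background_compression=zstd /dev/sdX1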

  • This is actually a feature that enterprise SAN solutions have had for a while; being able to choose your level of redundancy & performance at a file level is extremely useful for minimising downtime and not replicating ephemeral data.

    Most filesystem features are not for the average user who has their data replicated in a cloud service; they're for businesses where this flexibility saves a lot of money.
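    ZFS already gives a taste of this at the dataset level; a minimal sketch, assuming a pool named tank with made-up dataset names:

        # keep two copies of every block of the important stuff, one of scratch data
        zfs set copies=2 tank/documents
        zfs set copies=1 tank/scratch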

  • Regarding 1: if you open up dmesg after it happens and you see an error like "No edid read", your GPU is having a hard time automatically getting the monitor's EDID over DisplayPort. My 7800 XT has this issue.

    If your monitor setup doesn't change much, you can manually set the EDID on a per-output basis. Here is a good guide.

    Also, regarding 3: you may need to set the amdgpu feature mask in your kernel parameters.
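    For reference, the kernel parameters look roughly like this (DP-1 and the EDID filename are assumptions; check /sys/class/drm/ for your output's name, and the EDID binary has to be visible to the kernel, usually via the initramfs):

        # force a saved EDID for one DisplayPort output
        drm.edid_firmware=DP-1:edid/my-monitor.bin
        # unlock the full amdgpu power/feature set (for issue 3)
        amdgpu.ppfeaturemask=0xffffffff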

  • I have a 7800 XT on Linux and I want to point out that I still run into the "drm_fec_ready" and "no edid read" bugs every day.

    amdgpu is miles ahead of what NVIDIA is offering, but it is still a GPU driver on a second-class platform. Do not expect a flawless experience on bleeding-edge hardware.

  • Kernel modules don't have to be open source provided they follow certain rules, like not using GPL-only symbols. This is the same reason you can use an NVIDIA driver.

    It's not enforced so much by law as by what the FSF and the Linux Foundation can prove and are willing to pursue; going after a company that size is expensive, especially when they're a Linux Foundation partner. A lot of major Linux Foundation partners are actively breaking the GPL.
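    You can see how the kernel keeps track of this on your own machine (the module name assumes the proprietary NVIDIA driver is installed):

        # proprietary modules declare a non-GPL license string...
        modinfo -F license nvidia
        # ...and loading one taints the kernel (non-zero means tainted)
        cat /proc/sys/kernel/tainted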

  • I work with SoC suppliers, including Qualcomm, and can confirm: you need to sign an NDA to get a heavily patched, old, orphaned kernel, often with drivers that are provided only as precompiled binaries, preventing you from updating the kernel yourself.

    If you want that source code, you also need to pay a lot of money yearly to be a Qualcomm partner, and even then you still might not have access to the sources for all the binaries you use. Even when you do get the sources, don't expect them to be updated for new kernel compatibility; you've gotta do that yourself.

    Many other manufacturers do this as well, but few are as bad. The environment is getting better, but upstream support seems to be a feature that many large manufacturers feel they can live without.

  • If you're messing with ACLs, I'm not sure deduplication will help you much; I believe (I don't have much experience with reflinks) the dedup checksum will include the metadata, so changing ACLs might ruin any benefit. Even if you don't change the ACLs, as soon as somebody updates a game, its checksum will change and won't converge back until everyone else updates.

    Even hardlinks preserve the ACL... Maybe symlinks to the folder containing the game's data, then the symlinks could have different ACLs?
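    A quick way to convince yourself that hardlinks share permissions (both names point at the same inode, so mode bits and ACLs travel together; the file names are made up):

        touch game.bin
        ln game.bin shared.bin      # hardlink: same inode, same metadata
        chmod 600 game.bin
        stat -c '%a' shared.bin     # prints 600; the "other" file changed too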

  • I actually found the opposite with my Steam library; on ZFS with ZSTD I only saw a ratio of 1.1 for steamapps, not that there's really any meaningful performance penalty for compressing it.
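    If you want to check your own numbers (the dataset name here is just an example, and compression only applies to data written after it's enabled):

        zfs get compression,compressratio tank/steamapps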