
  • I think you misunderstood what I was saying. I'm not saying Wayland magically makes everything secure. I'm saying that Wayland allows secure solutions. Let's put it simply:

    • Wayland "ignores" all the issues if that's what you want to call it
    • Xorg breaks attempts to solve these issues, which is much worse than "ignoring" them

    You mentioned apps having full access to my home directory. Apps don't have access to my home directory if I run them in a sandbox. But using a sandbox to protect my SSH keys or Firefox session cookies is pointless if the sandboxed app can just grab my login details as I type them and do as much or more harm than it could with the contents of my home directory. Using a sandbox is only beneficial on Wayland. You could potentially use nested Xorg sessions for everything, but that's more overhead and introduces all the same problems as Wayland (screen capture/global shortcuts/etc.) while having none of the Wayland benefits.
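
    For example, a sandbox can simply deny home directory access. A rough sketch with Flatpak (the app ID is a placeholder):

        # Cut the app off from your home directory:
        flatpak override --user --nofilesystem=home org.example.App
        # Inspect what the sandboxed app is still allowed to touch:
        flatpak info --show-permissions org.example.App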

    And given how garbage the modern state of sandboxing still is

    I'm not talking about "the current state" or any particular tool. One protocol supports sandboxing cleanly and the other doesn't. You might have noticed that display server protocols are hard to replace, so they should support what we want, not only what we have right now. If you don't see a difference between not having a good way to do something right now and never allowing for the possibility of doing it well, let's just end the discussion here. If those are the same to you, no argument or explanation matters.

    If you actual want to solve this issue you have to provide secure means to do all those task.

    Yes, that's exactly the point. Proposed protocols for these features allow a secure implementation to be configured. You could have a DE that asks you for every single permission an app requests. You don't automatically get a secure implementation, but it is possible. There might be issues with the Wayland protocol development process, or a lack of interest/manpower among DE/WM developers, or many other things that lead to subpar or missing solutions to current issues, but those are not inherent, unsolvable issues of the protocol.

  • In theory, yeah, a bit more control over what apps can and can’t access would be nice. In reality, it doesn’t really matter, since any malicious app can do more than enough damage even without having access to the Xserver.

    Complete nonsense. Moving away from a protocol that lets every single application log all inputs isn't "a bit more control over what apps can and can't access". We're switching from a protocol where isolation is impossible to one where it is possible.
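
    You can see this for yourself on any X11 session with a stock tool (the device id varies per machine):

        # Any unprivileged X client can watch every keystroke:
        xinput list                  # find your keyboard's device id
        xinput test <keyboard-id>    # prints keys typed into ANY window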

    The notion that if you can't stop every possible attack with a sandbox then you should not bother to stop any of them is also ridiculous. A lot of malware is unsophisticated and low effort. Not bothering to patch gaping security holes just because there might be malware out there that gets around a sandbox is like leaving all your valuable stuff on the sidewalk outside your house because a good thief would have been able to break in anyway. You're free to do so but you'll never convince me to do it.

    The solution is to not run malicious code

    Another mischaracterization of the situation. People don't go around deliberately running "malicious code". But almost everyone runs a huge amount of dubious code. Just playing games, a very common use case, means running millions of lines of proprietary code written by companies who couldn't care less about your security or privacy, or in some cases are actively trying to get your private data. Most games have some online component and many even expose you to unmoderated inputs from online strangers. Sandboxing just Steam and your browser is a huge step in reducing the number of exploitable vulnerabilities you are exposed to. But that's all pointless if every app can spy on your every input.

    Xnest, Xephyr and X11 protocol proxy have also been around for a while, X11 doesn’t prevent you from doing isolation.

    What's the point, then, of a client-server architecture if I end up starting a dedicated server for every application? It might be possible to get isolation this way, but it is obviously patched on top of a design that didn't account for isolation to begin with. Doing it this way breaks all the same stuff that Wayland breaks anyway, so it's not a better approach in any way.
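
    To illustrate the overhead, per-app isolation under X11 means something like this for every single program (the app name is a placeholder):

        # Start a nested X server with Xephyr and point one untrusted app at it:
        Xephyr :2 -screen 1280x720 &
        DISPLAY=:2 untrusted-app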

  • Eliminating screen tearing with two or more monitors running at different refresh rates is, as far as I know, impossible within the X11 protocol. This is especially annoying for high-refresh-rate VRR monitors, which could be tear-free at negligible cost in responsiveness.

    You also can't prevent processes from manipulating each other's inputs/outputs. An X11 system can never have meaningful sandboxing because of this. Maybe you could run a new, tweaked, sandboxed X server for every process, but at that point you're working against the protocol's fundamental design.

  • This would be an incredible QoL improvement for gaming, at least until all compositors reach feature parity. Imagine using your preferred compositor for everyday tasks, quick-switching to another one that supports VRR and/or HDR for gaming, and then back again, all without logging out and back in.

  • It's not about "accomplishing" something that couldn't be done with a database. It's about making these items tradeable on a platform that doesn't belong to a single entity, which is often the original creator of the item you want to sell. As good as the Steam marketplace might be for some people, every single sale pays a tax to Valve, and the terms could change at any moment with no warning. The changes could be devastating for the value of your collectibles that you might have paid thousands of dollars for. This could not happen on any decentralized system. It could be something else that isn't NFTs but it would absolutely have to be decentralized. Anything centralized that "accomplishes the same thing" doesn't really accomplish the same thing.

    It's worth noting that this sort of market control would never be considered acceptable in any other market. Can you imagine a car manufacturer requiring every sale to go through them? Would you accept paying them a cut when you resell your car? Would you accept having to go through them even to transfer ownership of the car to a family member? If a car manufacturer tried to enforce such terms on a sale, they would be called out for it and it would most likely be ruled unlawful. But nobody questions the implications of the exact same situation in a digital marketplace.

  • I know that zsh has the option to use vim-like keybindings if you're familiar with those.
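
    If you want to try it, it's a one-liner in your ~/.zshrc:

        # Switch zsh's line editor to vi-style keybindings:
        bindkey -v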

  • It is copyright infringement. Nvidia (and everyone writing kernel modules) has to choose between:

    • using the GPL-covered parts of the kernel interface and sharing their own source code under the GPL (a free software license)
    • not using the GPL-covered parts of the kernel interface

    Remember that the kernel is maintained by volunteers and by engineers funded by or working for many companies, including Nvidia's direct competitors, and Nvidia is worth billions of dollars. It is incredibly obnoxious of Nvidia to infringe on the kernel's copyright. To me, showing them zero tolerance for that infringement is 100% the appropriate response.
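
    You can see the kernel tracking this distinction on your own machine, roughly like so (assuming the proprietary module is installed):

        # A module declares its license in its metadata:
        modinfo -F license nvidia    # the proprietary driver does not declare "GPL"
        # Loading a proprietary module sets the "P" taint bit (value 1):
        cat /proc/sys/kernel/tainted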

  • These are terrible sources: 3 random CVEs and the opinions of randoms on the internet. The "sources" conflate arguments about systemd as an init system with the non-init parts and with criticisms of Poettering, and a lot of it is "this is bad" with no argument or, worse, incorrect arguments. If there is anything in there that actually proves something, link directly to it. I'm not going to sift through mountains of garbage to find it.

  • systemd is insecure, bloated, etc

    Citation needed

    If a distro that doesn't use systemd ends up booting much faster or being much easier to configure, maybe those are features you care about. But switching away from systemd in that case is merely an implementation detail. What you're really doing is moving from one distro to another that serves you better.

    Otherwise, the choice of init system has very little impact on the average user. Maybe it's worth switching init systems if you hate the syntax of unit files and/or the interface of systemctl/journalctl and you use them often enough to warrant the effort. People who want alternatives to systemd without having such a practical issue with it are doing so for philosophical reasons.
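
    For anyone weighing that effort, this is the syntax and interface in question (the unit name is hypothetical):

        # A minimal service unit, e.g. /etc/systemd/system/example.service:
        #   [Unit]
        #   Description=Example daemon
        #   [Service]
        #   ExecStart=/usr/bin/example-daemon
        #   [Install]
        #   WantedBy=multi-user.target

        systemctl status example.service    # inspect its state
        journalctl -u example.service -e    # jump to the end of its logs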

  • I don't think Linux literally waits for you to unmount the drive before it decides to write to it. It only looks that way because the buffering is completely hidden from the user.

    For example, say you want to transfer a few GB from your SSD to a slow USB drive. Let's say:

    • it takes about half a minute to read the data from the SSD
    • it takes ten minutes to write it to the USB
    • the data fits in the spare room you have in RAM at the moment

    In this scenario, the kernel will take half a minute to read the data into RAM and then report that the file transfer is complete. Whatever program is being used will also report to the user that the transfer is complete. The kernel should have already started writing to the drive as soon as the data started being read into RAM, so it should take another nine and a half minutes to complete the transfer in the background.

    So if you unmount at that point, you will have to wait nine and a half minutes. But if you leave it running and try to unmount ten minutes later it should be close to instant. That's because the kernel kept on writing in the background and was not waiting for you to unmount the drive in order to commit the writes.
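
    You can watch this happen and even force the flush early (the paths are placeholders):

        # "Dirty" is data buffered in RAM that hasn't reached the drive yet:
        watch grep -e Dirty: -e Writeback: /proc/meminfo

        # cp returns quickly; sync blocks until the buffers are flushed:
        cp big-file /mnt/usb/
        time sync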

    I'm not sure, but I think on Windows the file manager is aware of the buffering, so this doesn't happen, at least not for so long. But I think you can still end up with corrupted files if you don't safely remove the drive.

  • It's not just you. The DEs themselves generally don't mess with each other much, beyond possibly messing with each other's settings. But I've seen the packages' post-installation scripts cause issues. So it depends on the distro, I guess.

  • The same people saying that this is good are also mocking X and threads for losing users.

    These are not comparable. X and Threads are businesses that maximize their profits by making their platforms as big as possible. That is not true for Lemmy, and even if it were, the average user does not care about the platform's profits. So you can in fact make fun of the failures of big companies while being happy as part of a much smaller platform.

  • This man page is thirty pages long and has in depth descriptions of all fifty switches in alphabetical order, but all i want is an example on how to do a very simple, common thing with it. And of course, all commands have their own syntax (of course windows isn’t any better, outside of Powershell).

    Yes, man is intended to be a manual, so it's understandably bad at being a cheatsheet. Check out tldr or tealdeer. They are similar, but I found tealdeer to be much faster. Also try a shell with better completion than bash, like zsh or fish. Better completion will sometimes sidestep the need for a cheatsheet altogether.
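
    For example (assuming tealdeer or tldr is installed; both provide a tldr command):

        tldr tar
        # prints example-first entries along the lines of:
        #   Create an archive from files:
        #   tar cf target.tar file1 file2 file3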

    Don’t curl to bash, it’s dangerous.

    You can curl the file normally, inspect it, and then run it with bash. All the safety issues of running stuff you found online still apply (is the source trusted?), but you avoid the issues that arise specifically from piping curl into bash. Most applications don't need you to curl | bash in the first place, though, because of package managers.
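
    Concretely (the URL and filename are placeholders):

        # Download, read, then run, instead of piping straight into bash:
        curl -fsSL https://example.com/install.sh -o install.sh
        less install.sh
        bash install.sh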

  • Switching between Windows and Ubuntu led to a weird time difference on Window’s part (it still does)

    Google how to set your Windows clock to UTC. You could instead do the reverse and set Linux to local time, but I find it much cleaner for the system clock to be in UTC, as it's an objective and stable standard, unlike local time, which can change with daylight saving time or when you move.
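
    The Windows-side fix you'll find is the RealTimeIsUniversal registry value. On the Linux side, you can check the current mode and, if you really prefer, flip the RTC to local time instead (timedatectl itself warns that this mode can cause problems):

        timedatectl                         # shows "RTC in local TZ: yes/no"
        sudo timedatectl set-local-rtc 1    # make Linux expect local time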

  • Let’s remove the context of AI altogether.

    Yeah, sure, if you do that then you can say anything. But the context is crucial. Imagine that you could prove in court that I went down to the public library with a list that read "Books I want to read for the express purpose of mimicking, and that I get nothing else out of", and that your book was on that list. Imagine you had me on tape saying that, for me, writing is not a creative expression of myself, but rather that I am always trying to find the word that the authors I have studied would use. Now that's getting closer to the context of AI. I don't know why you think you would need me to sell verbatim copies of your book to have a good case against me. Just a few passages should suffice, given my shady and well-documented intentions.

    Well that's basically what LLMs look like to me.

  • But what an LLM does meets your listed definition of transformative as well

    No, it doesn't. Sometimes the output is used in completely different ways, but sometimes it is a direct substitute. The most obvious example is when it writes code that the user intends to incorporate into their own work. That output is not transformative by this definition, as it serves the same purpose as the original works and adds no new value, except stripping away the copyright, of course.

    everything it outputs is completely original

    [citation needed]

    that you can’t use to reconstitute the original work

    Who cares? Being able to reconstitute the original has never been the test for copyright infringement. For example, as far as I know I can't make and sell a doll that looks like Mickey Mouse from Steamboat Willie, even though by your definition it should count as transformative: a doll has nothing to do with the cartoon, it provides a completely different sort of value, and it is nowhere near a direct copy or able to reconstitute the original. And yet, as far as I know, I am not allowed to do it, and even if I am, I won't risk going to court against Disney to find out. The fear alone has made sure that we mere mortals cannot copy and transform even the smallest parts of copyrighted works owned by big companies.

    I would find it hard to believe that if there is a Supreme Court ruling which finds digitalizing copyrighted material in a database is fair use and not derivative work

    Which case are you citing? Context matters. LLMs aren't just a database; they are also a frontend for extracting the data from that database, one that is being heavily marketed and sold to people who might otherwise have bought the original works instead.

    The lossy compression is also irrelevant; otherwise literally every pirated movie/series release would be legal. How lossy is it, even? How would you measure it? I've seen GitHub Copilot spit out verbatim copies of code. I'm pretty sure that if I ask ChatGPT to recite a very well known poem, it will also produce a verbatim copy. So at least some works are retained completely losslessly. Which ones? No one knows, and that's a big problem.

  • "Transformative" in this context does not mean simply not identical to the source material. It has to serve a different purpose and to provide additional value that cannot be derived from the original.

    The summary they talk about in the article is a bad example for a lawsuit because it is indeed transformative: a summary provides a different sort of value than the original work. However, if the same LLM writes a book based on the books used as training data, it is definitely not an open-and-shut case whether that is transformative.

  • Not a lawyer so I can't be sure. To my understanding a summary of a work is not a violation of copyright because the summary is transformative (serves a completely different purpose to the original work). But you probably can't copy someone else's summary, because now you are making a derivative that serves the same purpose as the original.

    So here are the issues with LLMs in this regard:

    • LLMs have been shown to produce verbatim or almost-verbatim copies of their training data
    • LLMs can't figure out where their output came from, so they can't tell their user whether the output closely matches any existing work, and, if it does, what license it is distributed under
    • You can argue that by its nature, an LLM is only ever producing derivative works of its training data, even if they are not the verbatim or almost-verbatim copies I already mentioned