Posts: 0 · Comments: 106 · Joined: 5 mo. ago

  • They don't have to explain anything to you

    Correct. We won't do business if one cannot give an explanation. One can write in the privacy policy that they collect all sorts of private information, but the kicker, for me, is often the why.

    The vast majority of people who run into the Anubis setup will have no fucking clue what any of it means, nor give a shit about it. They just want to get to the content.

    One doesn't have to care about the Miranda warning, but it's still read to someone in case they do.

  • It's their responsibility to make clear the reason they require it.

    Something like Anubis does it well by adding a "Why am I seeing this?" section to their JavaScript challenge.

    You are seeing this because the administrator ...

    Anubis is a compromise. Anubis uses a Proof-of-Work scheme in ...

    Ultimately, this is a hack whose real purpose is to give a "good enough" placeholder solution so that more time can be spent on ...

    Please note that Anubis requires the use of modern JavaScript features that plugins like ...

    Sadly, you must enable JavaScript to get past this challenge. This is required because AI companies have ...

    If you require something, such as an account, to view the content, simply add why.
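
    The Anubis challenge quoted above is a proof-of-work scheme. As a rough sketch of the idea (the one-hex-digit difficulty and the challenge string are made up for illustration): the client must find a nonce such that hashing "challenge:nonce" yields a digest with a required prefix, which costs a scraper a little CPU per page fetched.

    ```shell
    # Toy proof-of-work sketch: find a nonce whose SHA-256 over
    # "challenge:nonce" starts with the hex digit 0 (a difficulty of one
    # hex digit, i.e. ~16 attempts on average).
    challenge="example-challenge"
    nonce=0
    while true; do
      h=$(printf '%s:%s' "$challenge" "$nonce" | sha256sum | cut -c1)
      if [ "$h" = "0" ]; then break; fi
      nonce=$((nonce + 1))
    done
    echo "nonce=$nonce"
    ```

    Verification is cheap for the server side: a single hash over "challenge:nonce". A real deployment would use a higher difficulty and a signed, expiring challenge rather than a fixed string.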

  • Simply: do the protections against someone taking your computer and installing a malicious program before/as your OS, or against a program that has attained root on your machine installing itself before/as your OS, matter enough to you to justify the increased risk of being locked out of your machine and the effort to set it up and understand it?

    If you don't understand it and don't want to put in the effort to, then my advice would be to leave it off. It's simple, and the likelihood it saves you is probably very minuscule.

  • ...and this is the primary reason they cannot be supported directly by Graphene.

    This is phrased like a technical boundary, but it is a policy one: they are not supported because Graphene chooses not to support them. Not to say it would be easy, but they are making a choice to support only Google's hardware.

    They don't have to.

  • My biggest gripe with flatpak is the fact it isn't sandboxed properly by default.

    I'm not referring to vendor-given privileges. Every flatpak, unless explicitly run with the --sandbox option, has a hole in the sandbox to communicate with the portal. Even if you try to use Flatseal to disallow it, it will still be silently allowed.

    This leads to a false sense of security. A notable issue I found is that if you disallow network access to a flatpak, it can still talk to the portal and tell it to open a link in your browser. This allows it to communicate back to a server through your browser even though you disallowed network access. Very terrible.

    Security should be dead easy and difficult to mess up. The countless threads I've read on flatpak tell me the communication about flatpak's actual security has been quite terrible, and so it doesn't fit this category.
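
    For reference, here is roughly how those knobs look on the command line. The app id org.example.App is hypothetical, so the commands are illustrative and guarded to skip cleanly when flatpak is absent:

    ```shell
    # Hedged sketch: tightening one Flatpak's sandbox (hypothetical app id).
    if command -v flatpak >/dev/null 2>&1; then
      # User-level override that removes network access for this app:
      flatpak override --user --unshare=network org.example.App || true
      # Inspect the permissions the app ends up with:
      flatpak info --show-permissions org.example.App || true
      # The strict sandbox is what actually closes the portal hole:
      flatpak run --sandbox org.example.App || true
    else
      echo "flatpak not installed"
    fi
    ```

    `flatpak override --show org.example.App` displays the active override, but as described above, the portal connection itself is only dropped by `--sandbox`.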

  • ... Alpine is designed to be friendly to corporations who want to lock down their devices and prevent you from modifying them.

    "Designed to" assumes intent. Alpine is absolutely designed to be Small, Simple, and Secure. Using busybox instead of the GNU coreutils is a means to this end. Using musl instead of glibc is a means to this end.

    On the about page they list why they use these tools. The licensing is not listed at all.

  • A threat model in which you don't trust the Linux Foundation and volunteers but do trust Microsoft.

    It's all about what you want to protect. If a security breach is worse for you on Linux than it is on Windows because of which party has the data, then for you, Windows might be more secure.

    Some people get confused because they think there is some objective, measurable security rating one can apply to a system for every person. There isn't. We may use the same systems but have different threat models, and thus rate the security differently.

  • Privilege escalations always have to be granted by an upper-privilege process to a lower-privilege process.

    There is one general way this happens.

    For example: root opens up a line of communication between itself and a user; the user sends input to root; root mishandles it, causing undesired behavior within the root process, which can lead to bad things happening.

    All privilege escalation is two different privilege levels having some form of interaction. Crossing the security boundary. If you wish to limit this, you need to find the parts of the system that cross that boundary, like sudo[1], and remove those from your system.

    [1]: sudo is an SUID binary, meaning that when you run it, it runs as root. This is a problem because you, as a lower-privileged process, have some influence on code that executes within the program (code running as root).
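
    To see the SUID mechanism concretely, here is a harmless demonstration: setting the setuid bit on a file you own needs no root and only grants your own uid, but it is the same mode bit that makes a root-owned /usr/bin/sudo run as root.

    ```shell
    # Copy a harmless binary and set the setuid bit on it. Because we own
    # the copy, it would run with *our* uid; sudo is this same bit on a
    # root-owned file, which is what makes it execute as root.
    cp /bin/true ./suid-demo
    chmod u+s ./suid-demo
    ls -l ./suid-demo   # mode starts with -rws: the 's' is the setuid bit
    ```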

  • secureblue is about as secure as Linux can get...

    Unless you have an unusual threat model, this statement is utter nonsense. I can run a kconfig-stripped kernel with zero kernel modules and a single, fully audited and trusted userspace process, without the ability even to spawn other processes or talk to the network (because the kernel lacks support for the IP stack).

    Secureblue might offer something significant when compared to other popular and easily usable tools, but if you compare it to the theoretical limit of Linux security, it's not even close.

    I examined Secureblue's kernel parameters and turned several of them off, because some were mitigations for something that was unnecessary. That is: the kernel would determine that your hardware is not affected by a given vulnerability, and thus that there is no need to enable the corresponding mitigation. But they would override this and force the mitigation anyway, so you take a performance hit for, as far as I understand, no security gain. Not sure why they did that. A mistake? Or did they simply not trust the kernel's analysis for some reason? Who knows.
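
    The kernel's per-vulnerability assessment mentioned here is visible in sysfs; each entry reads "Not affected", "Vulnerable", or "Mitigation: ...". A guarded way to print it:

    ```shell
    # Print the kernel's hardware-vulnerability assessment, one line per
    # known CPU vulnerability (guarded for systems without this sysfs dir).
    dir=/sys/devices/system/cpu/vulnerabilities
    if [ -d "$dir" ]; then
      for f in "$dir"/*; do
        printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
      done
    else
      echo "no vulnerability reporting on this kernel"
    fi
    ```

    Forcing a mitigation on a CPU whose entry already says "Not affected" is the performance-for-nothing trade being criticized.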

  • ...if someone nefarious gets to the point they can read this stuff then they’ll already be able to record your screen, log keystrokes, etc.

    No screenshots -> less data. Less data -> lower breach severity.

    (Unless you have an unusual threat model)

  • The difference would be that RMS is extremely well-versed in computer technology. He understands the problems with non-free software.

    Someone with his knowledge could choose to disregard those issues for convenience, but Stallman is willing to make great sacrifices.

  • If you don’t have privacy from the government, you don’t have privacy.

    Privacy refers to more than just privacy regarding the government.

    Your threat model and situation might mean that if the government knows something, it's as bad as if every single person knows it.

    But this isn't for everyone.