
  • To be fair, it is useful for other purposes, but the cost to users is likely to be huge, and blocking ad blockers is one of those purposes. It probably also breaks things outside your browser, because there's no point in securing a browser that runs in an untrusted environment. IIRC there is (or was) an issue running Netflix on certain Android devices and on rooted devices after a similar feature was added to Android.

  • Actually, they are controlling your graphics driver. If you're using a custom driver you'll fail attestation because you have untrusted code in your kernel and/or browser process. I expect this will also fail if you're using an old driver with known vulnerabilities that allow you to use your own device in unexpected ways.

  • I can't blame free software for not working correctly in Apple's special browser that only exists on their platform. Years ago they were really focused on having a good, standards-compliant browser, but it has started to become the new Internet Explorer of the iPhone. At least on Mac computers you can install a different browser.

  • It may not be that sinister. Most companies intend for their "smart" products to be just smart enough for the average user and don't spend the money to support local control or anything else. I have some "smart" lamps that are worse than regular lamps unless you flash them with custom firmware. Some newer models of these lamps are more "secure" and can't be reflashed into something useful (seriously, who wants to talk to Alexa every time they want to turn on a lamp?).

  • I was hoping for "easy to hack" as in it's your device and you can use it however you want for as long as you want. This probably means the opposite in most cases. I guess it's still helpful for labeling products to be suspicious of.

  • I was looking for something better than sharing pictures four at a time on Mastodon and checked out Pixelfed. I don't understand it at all. It's like the whole system is designed around sharing individual pictures. Even when an instance allows sharing a few pictures at once, other users will only see the first picture, because multiple pictures are obviously an afterthought in the UI. Maybe it's because I'm too old and I want Flickr instead of Instagram. I'm still posting pictures up to four at a time on Mastodon.

  • You can create multi-platform images (actually manifests of single-platform images) without buildx, and buildx on its own isn't enough to create multi-platform images. In its default configuration, buildx can usually build images for different processor architectures, but it requires CPU emulation to do it. If the Dockerfile compiles code, it runs the compiler under emulation instead of cross compiling. Doing it without CPU emulation involves configuring builders running on the other platforms and connecting to them over the network.

    I don't know if buildx supports building images for multiple operating systems, but it probably doesn't matter. I've only ever seen container images for Linux and Windows, and it's virtually impossible to write a single Dockerfile that works for both of those and produces a useful image. The multi-platform images that support both Linux and Windows are probably always created using the manifest manipulation commands instead of buildx, as sketched below.
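    A minimal sketch of that manifest approach, assuming a hypothetical registry.example.com/myapp image and that each single-platform image has already been built and pushed (the docker manifest subcommands are real, but on older Docker versions they're gated behind experimental CLI features):

    ```
    # Build and push one single-platform image per architecture,
    # natively on each machine or by cross compiling.
    docker build -t registry.example.com/myapp:1.0-amd64 .
    docker push registry.example.com/myapp:1.0-amd64
    # ...repeat on an arm64 builder for :1.0-arm64...

    # Combine the pushed images into one manifest list, no buildx involved.
    docker manifest create registry.example.com/myapp:1.0 \
        registry.example.com/myapp:1.0-amd64 \
        registry.example.com/myapp:1.0-arm64

    # Fix up the platform metadata on an entry if needed.
    docker manifest annotate --os linux --arch arm64 \
        registry.example.com/myapp:1.0 registry.example.com/myapp:1.0-arm64

    # Push the manifest list; clients then pull the matching image automatically.
    docker manifest push registry.example.com/myapp:1.0
    ```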

  • In a weird way, for most people Docker is just used to compensate for problems that Windows used to have but doesn't really have anymore. On Linux, when you install something, it often gets dumped into a shared prefix like /usr or /usr/local, or it depends on libraries that are installed into /lib, /usr/lib, or /usr/local/lib. If the libraries are versioned correctly, it's usually not a big problem that applications share components, but sometimes shared files conflict with each other and you end up with something similar to the old Windows DLL hell, especially if applications aren't officially packaged for the distro you're running. Using a container image avoids this because only the correct libraries and support files are in the image, and they're in a separate location, so they can easily be swapped without impacting other applications that might be using similar files (see the sketch after this comment).

    However, on Windows these days it's highly discouraged for programs to install anything into shared directories like that. Usually an application installs everything it needs into its own directory. For the things Microsoft does put into shared directories, there's a system called SxS (side-by-side assemblies) that's supposed to prevent conflicts between incompatible versions. It's not perfect, because there are still cases where you can get conflicts, but it's pretty uncommon now.
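    A rough sketch of the isolation point (some-app and some-image are hypothetical names, not real packages):

    ```
    # On the host, a binary resolves shared libraries from prefixes that
    # every installed application shares:
    ldd /usr/local/bin/some-app    # sonames resolved from /usr/lib, /usr/local/lib, ...

    # Inside a container, the same lookup only sees the libraries shipped
    # in the image's own filesystem, so another app's copies can't conflict:
    docker run --rm some-image ldd /usr/local/bin/some-app
    ```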

  • Docker is not platform-independent. The OS kernel is not included in the image, and the executables in the image rely on the host having a compatible kernel and system architecture. Only userspace components like software libraries and, unfortunately, CA certificates are included in the image.

    It primarily supports x86_64 Linux systems. If you want to run on ARM, you need special images or a CPU emulator, as sketched below. If you want to run on macOS or Windows, you usually need a VM. There are Windows Docker containers, but Docker as a technology isn't really applicable to Windows because of the messy separation between userspace and the kernel (if you've ever tried to run Docker on Windows Server without Hyper-V support, this is why it's so difficult to get working and why it breaks after Windows updates).
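    A quick way to see both points, using real docker flags (alpine is just an arbitrary example image):

    ```
    # The image config records a target os/arch; there is no kernel inside it.
    docker image inspect --format '{{.Os}}/{{.Architecture}}' alpine
    # -> linux/amd64 on a typical x86_64 host

    # Running a foreign-architecture image requires an emulator (e.g. qemu
    # registered via binfmt_misc); without one, the binary simply won't run.
    docker run --rm --platform linux/arm64 alpine uname -m
    # -> aarch64 when emulation is set up
    ```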