
  • Isn't owning the domain proof enough already?

    Nobody else could possibly use max-p.me as their handle, and proving control of the domain is plenty for security-sensitive things like Let's Encrypt.

    Anyone you'd care to mark verified already brought their own domain.

  • Depends on your goal: do you want to preserve what you can at its best quality, or do you want to make sure you have plenty of entertainment to get by?

    I'd probably go with the lower quality. We watched TV in 480i and under for decades, and 720p is still quite watchable even today. In HEVC or AV1 you can really pack a decent collection.

  • Server side rendering has become popular again with frameworks like Svelte and Next.js. For a while basic React was popular and entirely on the client side, and thus dependent on JS to work at all.

  • You can't easily locate the latest version of a file on an append-only medium without writing an index in a footer somewhere, and even then, if you're trying to pull an older version you'd still need to traverse the whole medium.

    That said, you use ZFS, so you can literally just zfs send it. ZFS already knows everything that needs to be known, so it'll be a perfect incremental. You'd definitely need to restore the entire dataset to pull anything out of it, reapplying every incremental one by one, and if just one is unreadable the whole pool is unrecoverable, but the same would be true of tar incrementals. In exchange it's as efficient as possible, since ZFS knows the exact change set it needs to bundle up. The stream is unidirectional, which is why you can just zfs send into a file and burn it to a CD.
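
    As a rough sketch (pool, dataset and snapshot names are all made up):

        # Full stream first, then incrementals containing only the changes
        zfs snapshot tank/data@2024-01
        zfs send tank/data@2024-01 > /backup/full-2024-01.zfs
        zfs snapshot tank/data@2024-02
        zfs send -i tank/data@2024-01 tank/data@2024-02 > /backup/inc-2024-02.zfs
        # Restoring means replaying every stream, in order:
        zfs receive tank/restored < /backup/full-2024-01.zfs
        zfs receive tank/restored < /backup/inc-2024-02.zfs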

    Since ZFS can easily tell you the difference between two snapshots, it also wouldn't be too hard to write a Python script that writes out the full new version of each changed file and catalogs which file and which version is on which disc, for a more random-access pattern.
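
    The change list such a script would start from is one command away (names made up again):

        # Lists files added (+), removed (-), modified (M) or renamed (R) between two snapshots
        zfs diff tank/data@2024-01 tank/data@2024-02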

    But really, for Blu-rays I think I'd just do it the old-fashioned way: sort the data to fit on a disc, label the disc with what's on it, and if I update the contents, burn a v2 on the next disc.

  • I do the same, but I also have a few trap addresses that nobody sane would ever email but that are easy for scrapers to grab. Easy way to train the spam filter.

  • Both use Linux under the hood. You can even install LineageOS on some TVs.

    The only reason Android TV is bullshit is the manufacturers, because casual users want shit like Netflix and Prime preinstalled. Google TV in particular comes with a lot of crap and ads, which, believe it or not, some users take as a feature.

    But that's not inherent to Android TV as an OS; it's exactly like Android phones and manufacturers preloading a bunch of crap to make an extra buck. If you run AOSP you get none of that crap, and it's fully open-source.

  • Even on Reddit there have been undeleters and archives since basically forever.

    Even for regular websites, there's always the Internet Archive.

    It's hardly a new problem: you always had to assume the Internet was forever.

  • I would distrust my carrier well before I distrust the encryption. Even when roaming, your Internet is tunnelled through your home carrier using an internal VPN. It even works in China; that's a fairly common way to get around their firewall.

  • A system where everything is sandboxed by default exists too; you do that with a rule that denies everything that's not explicitly labeled as allowed.

    Only your package manager knows, at install time, how to label an application such that it only has access to the stuff it needs access to. That information has to come from somewhere.

    Security is inherently a compromise. You've always been able to mount /home noexec so users can't execute non-approved stuff. But then Steam doesn't work, and none of the games work, because that makes them all foreign executables you're not allowed to execute. Steam has Flatpak-specific code to make it work with the nested sandbox.
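
    The noexec variant, as a sketch (device and paths are placeholders):

        # Hypothetical /etc/fstab entry mounting /home without execute permission:
        #   /dev/sda3  /home  ext4  defaults,nosuid,nodev,noexec  0 2
        sudo mount -o remount,noexec /home
        # Directly executing anything under /home now fails:
        /home/user/game.x86_64   # bash: Permission denied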

    It's up to the distros to make those choices for the desired user experience. Most regular distros are a bit old-fashioned and leave a lot of freedom to the user. You can also have a distro for office workstations where, for security, you really cannot run anything but company software. Or a distro like SteamOS, so regular users can't really break it.

  • Mainly because it's not the kernel's job. It provides abstractions to access the hardware, manages memory and manages processes; it doesn't care what userspace does, that's userspace's problem.

    The kernel is responsible for enforcing security policies, but not for writing them or discovering them. It doesn't know what an "app" is, or what a permission would look like.

    It's userspace that assigns labels to files and writes the SELinux policies; the kernel is just programmed with where the boundaries are. As an example, when you log in on a Linux computer, logind assigns your user access to the keyboard, mouse, display and audio, then starts your session, and that's how you get access to those /dev nodes. If you switch users, they're yanked away from you so the other user can use them without you snooping on them.
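
    You can actually see this on a systemd machine: logind attaches an ACL for the active session's user to the device nodes, which getfacl shows (the user name here is an example):

        getfacl /dev/dri/card0
        # Among the usual owner/group entries, you'll see something like:
        #   user:alice:rw-    (added by logind for the active seat)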

    Userspace uses the kernel's features to implement the permission systems. That's basically what Flatpak does: leverage those kernel features to sandbox the application. And it works great and is effective.
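
    You can poke at the same kernel primitives yourself; for instance, unshare drops a process into fresh namespaces much like Flatpak's bubblewrap helper does:

        # New user, PID and mount namespaces; with /proc remounted,
        # ps can only see this little subtree of one process.
        unshare --user --map-root-user --fork --pid --mount-proc ps aux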

    Android uses the Linux kernel and its features for its own sandbox and permission system too.

    Generally, the kernel provides the tools for userspace to be able to do things; that's its purpose. For example, all the OpenGL and Vulkan stuff is in userspace, not the kernel. The kernel doesn't know what Vulkan is and doesn't care; it mediates access to the GPU, reserving memory on it and uploading code to it. The code comes from your GPU driver in userspace.

  • That's also effectively what Flatpak and Snap use, and Steam's Runtime uses containers too.

  • The bible.

    It's unfortunately been distorted a lot by translation over time, but it was originally a story about morals in a world of greed.

    If Jesus came back he would be crucified again for being too "woke".

  • I also wanted to put an emphasis on how working with virtual disks is very much the same as working with real ones. The same well-known utilities for copying partitions work perfectly fine: the same cgdisk/parted and dd dance as you'd do otherwise.

    Technically, if you install the arch-install-scripts package on your host, you can even install Arch Linux into a VM exactly as if you were in archiso, with the comfort of your desktop environment and browser. Straight up pacstrap it directly into the virtual disk.
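
    Roughly like this, assuming the virtual disk is attached as /dev/nbd0 and already partitioned and formatted:

        sudo mount /dev/nbd0p2 /mnt
        sudo pacstrap /mnt base linux linux-firmware
        genfstab -U /mnt | sudo tee -a /mnt/etc/fstab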

    Even crazier, NBD (Network Block Device) is generic, so it's not even limited to disk images. You can forward a whole-ass drive from another computer over WiFi and do what you need on it, even pass it to a VM and boot it up.
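
    A sketch of that (hostname and port are placeholders, and nbd-client syntax varies a bit between versions):

        # On the machine that has the drive: export /dev/sdb over the network
        sudo qemu-nbd --port 10809 /dev/sdb
        # On your machine: attach it as a local block device
        sudo modprobe nbd
        sudo nbd-client some-host 10809 /dev/nbd0
        # /dev/nbd0 now behaves like a local drive, partitions and all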

    With enough fuckery you could even wrap the partition in a fake partition table and boot the VM off the actual partition, making it bootable by both the host and the VM at the same time.

  • Yeah, that's enough to not have it exposed directly. I understand why they did it that way but very good to know, thanks!

  • What you're trying to do is called a P2V (Physical to Virtual). You want to copy the partition directly, as going through a file share via Linux will definitely strip some metadata Windows wants on those files.

    First, make a disk image that's big enough to hold the whole partition and 1-2 GB extra for the ESP:

        qemu-img create -f qcow2 YourDiskImageName.qcow2 300G

    Then you can make the image behave like a real disk using qemu-nbd:

        sudo modprobe nbd
        sudo qemu-nbd -c /dev/nbd0 YourDiskImageName.qcow2

    At this point, the disk image behaves like any other disk at /dev/nbd0.

    From there, create a partition table. You can use cgdisk or parted, or even the GParted GUI will work on it.
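
    For example with parted (the layout and sizes are just an illustration):

        sudo parted --script /dev/nbd0 \
            mklabel gpt \
            mkpart ESP fat32 1MiB 1GiB \
            set 1 esp on \
            mkpart windows ntfs 1GiB 100%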

    And finally, copy the partition over with dd:

        sudo dd if=/dev/sdb3 of=/dev/nbd0p2 bs=4M status=progress

    You should also copy the ESP/boot partition over so the bootloader works.
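
    Same dance as before, assuming the source ESP is /dev/sdb1 and it goes to the first partition on the image (adjust to your layout):

        sudo dd if=/dev/sdb1 of=/dev/nbd0p1 bs=4M status=progress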

    Finally, once you're done with the disk image, disconnect it:

        sudo qemu-nbd -d /dev/nbd0
  • I keep hearing claims that it's not secure enough to be exposed on the Internet, but I can't seem to find anything about unauthenticated vulnerabilities. It's got a fair amount of CVEs, but they all seem to require an already authenticated user, mainly to XSS an admin from a regular account or the like.

    It's written in C#, and publicly all you can do is pretty much attempt to log in, so this feels like it should be pretty sane compared to some of the PHP crap I run.

    Do you have any examples of previous exploits or anything else to be concerned about?

  • I think it counts. You always have the option of taking your data with you and going elsewhere, which is one of the main points of self-hosting: being in control of your data. If they jack up the prices or whatever, you just pack up; you're never forced to keep paying.

    Also, hosting an email server at home would be an absolute nightmare. It took me 10+ years to build up that IP reputation, and I'm holding on to it as long as I can.

    I have a mix of both: private services run at home, public ones on a bare-metal server I rent. I still get the full benefits of having my own NextCloud and all. Ultimately, even at home I'd still be renting an Internet connection, unless I ran a local-only server.

  • The language itself has gotten a bit better. It's not amazing but it's decent for a scripting language, and very fast compared to most scripting languages. TypeScript can also really help a lot there, it's pretty good.

    It's mostly the web APIs and the ecosystem that's kinda meh, mostly due to its history.

    But what you dislike has nothing to do with JavaScript itself; it's big corpos having way too many developers iterating way too fast and creating a bloated mess of a project with a million third-party dependencies from npm. I'm not even making this up: I've legit seen a 10MB unit-test file make it into the production bundle of a real product I consulted on.

    You don't have to use React or Svelte or any of the modern bloated stuff nor any of the common libraries. You can write plain HTML and CSS and a sprinkle of JavaScript and get really good results. It's just seen as "bad practice" because it doesn't "webscale", but if you're a single developer it's perfectly adequate. And the reality is short of WebAssembly, you're stuck with JS anyway, and WASM is its own can of worms.

    And even then, React isn't that bad. There's just one hell of a lot of very poorly written React apps, in big part because it will let you get away with it. It's full of footguns loaded with blanks, but it's really not awful if you understand how it works under the hood and write good code. Some people are just lazy: they import something, and suddenly the same data gets loaded in 5 different spots, twice if you have strict mode enabled. I've written apps that load instantly and respond instantly even on a low-end phone, because I took the time to test them, identify the bottlenecks and optimize them all. JavaScript can be stupid fast if you design your app well. If you're into the suckless philosophy, you can definitely make a suckless webapp.

    What you hate is true of most commercial software written in just about any language, be it C, C++, Java or C#. Bug fixes and faster response times don't generate revenue; new features and special one-off event features generate much, much more, so minor bugs mostly never get addressed. And of course all those features end up being the 90% of bloat you never use but still have to load as part of the app.

  • It's literally been working just fine for like a decade? Even for NVIDIA users that's kind of a stretch.

    Maybe if you share more details about your issues and your setup we can help fix it.