Posts: 10 · Comments: 586 · Joined: 11 mo. ago

  • if you want em headless, you might wanna try DietPi (https://dietpi.com/). It's a Debian base with an active developer maintaining a bunch of very useful utilities designed to make managing Pis and SBCs as simple as possible. Better yet, it's designed to be extremely light on system resources, and strips out stuff like the display server unless you choose to install it.

    I've had it running as a home server on a Pi 4B for years now and it's rock solid.

  • Maybe for hosting a blog or something, but I wouldn't self host anything more important than that. Even then, GitHub Pages supports custom domains on its free tier, so you don't even need to do the hosting in that scenario.

  • Lots of frameworks for applications and games have automatic translation of file paths to sensible directories, but when you're writing software you're probably doing shit fast and dirty until it's ready for release; by that time you have a bunch of people relying on your software, so changing the file structure will cause loads of issues.

  • For commercial stuff, 50 Gbps is probably useful, especially if you're not large enough to commission your own fibre cables. But for the average person it's probably not too useful: at those speeds you're transferring fast enough to saturate even the fastest commercially available storage.
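The back-of-the-envelope arithmetic behind that claim, sketched with ballpark throughput figures (the storage numbers below are rough assumptions, not benchmarks):

```python
# Rough sketch: can consumer storage keep up with a 50 Gbps link?
# Storage throughput figures are ballpark assumptions, not measurements.

LINK_GBPS = 50                   # line rate in gigabits per second
link_gb_per_s = LINK_GBPS / 8    # ~6.25 GB/s of payload, ignoring protocol overhead

# Approximate sequential write speeds (GB/s) for common storage tiers
storage = {
    "SATA SSD": 0.55,
    "PCIe 3.0 NVMe": 3.5,
    "PCIe 4.0 NVMe": 7.0,
}

for name, write_speed in storage.items():
    verdict = "keeps up" if write_speed >= link_gb_per_s else "gets saturated"
    print(f"{name}: {write_speed} GB/s -> {verdict} by a {LINK_GBPS} Gbps link")
```

By these numbers even a Gen3 NVMe drive can't absorb a sustained 50 Gbps stream; only top-end Gen4 drives come out ahead, which is the sense in which the link outruns most people's hardware.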

  • I looked up the stats and yeah, it's more like A55 vs A72 (Pi 4B), but to reiterate my point about compatibility and potential performance over the next few years:

    https://www.tomshardware.com/pc-components/cpus/risc-v-cpu-runs-the-witcher-3-at-15-fps-64-core-chip-paired-with-radeon-rx-5500-xt-gpu-deliver-laggy-gameplay

    15 fps in The Witcher 3 is wild for an architecture that's running through a compatibility layer and is still incredibly immature. I'd also note that I'm not sure how much overhead box64 has; it's not emulation in the same way that WINE Is Not an Emulator, which as we know lets it sometimes run as fast as native Windows.

  • Thanks to box64, a lot of software can actually run on RISC-V when using Linux, but the performance is just about pushing Raspberry Pi 4 levels at best.

    But also, if the source code for some software is available for ARM/x64, you can usually just compile it yourself. A lot of compilers already support RISC-V, but obviously distros won't bother maintaining apps on lesser-used architectures.

  • There are other costs, too. Someone has to spend a LOT of time maintaining their repos: testing and reviewing each package, responding to bugs caused by each packaging format's choice of dependencies, and doing all of this across multiple branches of supported distro versions! That's a lot of man-hours that could still go into app distribution, but pooled together they could help make even more robust and secure applications than before.

    And, if we're honest, except for a few outliers like Nix, Gentoo, and a few others, there's little functional difference between package formats, which simply came to exist to fill the same need before Linux was big enough to establish a "standard".


    Aaaanyway

    I do think we could have package formats leveraging torrenting more, though. It could make updates a bit harder to distribute quickly in theory, but nothing fundamentally out of the realm of possibility. Many distros even use torrents as their primary form of ISO distribution.
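The core mechanism that would make torrent-style package distribution trustworthy is piece hashing: the payload is split into fixed-size chunks and each chunk is hashed, so peers can verify data from untrusted sources chunk by chunk. A minimal sketch of that idea (the piece size and payload here are made up for illustration; real torrents use 256 KiB+ pieces):

```python
import hashlib

PIECE_SIZE = 16  # bytes; purely illustrative, real torrents use much larger pieces


def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> list:
    """Split a payload into fixed-size pieces and hash each one,
    the way BitTorrent lets peers verify chunks independently."""
    return [
        hashlib.sha1(data[i:i + piece_size]).hexdigest()
        for i in range(0, len(data), piece_size)
    ]


# Hypothetical stand-in for a package update payload
pkg = b"pretend this is a package update payload!"
hashes = piece_hashes(pkg)
print(len(hashes), "pieces")  # 41 bytes / 16-byte pieces -> 3 pieces
```

A package manager built on this could fetch pieces from any peer and only accept ones whose hash matches a signed manifest, which is roughly how torrent clients keep downloads honest.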

  • the sandbox is the point! But yes, there are still shortcomings with the sandbox/portal implementation; still, if snaps can find a way to improve the end user experience despite containerising (most) apps, then so can flatpak.

    It's similar to how we're at that awkward cusp of Wayland being the one and only display protocol for Linux, but we're still living with the awkward pitfalls and caveats that come with managing such a wide-ranging FOSS project.

  • Linux works pretty well on most MacBooks to date. Granted, it probably runs slower than it would on most modern laptops, but those custom drivers are usually working on Linux not too long after launch.

  • Asahi Linux has done great work on compatibility for recent Apple hardware. They've even gotten some decent Vulkan performance out of the GPU despite reverse engineering it from scratch. Apple, meanwhile, refuses to implement Vulkan in favour of Metal, which makes macOS less compatible with graphically intensive apps than Linux.

    All praise the queen, Asahi Lina!

  • I've seen more than a few Linux fans using Apple hardware because it's usually quite sleek, even if it's antithetical to a lot of other things Linux users tend to like, such as repairability.

  • I know there are limitations to flatpak and other distro-agnostic app bundling systems, but there are simply far too many resources invested in repacking the same applications across every distro. These costs wouldn't be so bad if more resources were pooled behind a single repository and build system.

    As for using flatpaks at the core of a distro: we know from snaps that it's possible to distribute core OS components/updates via a containerised package format. As far as I know there's no fundamental design flaw that makes flatpak incapable of doing so; it's more that distro maintainers lack the will to develop the features flatpak would need to support it.

    That being said, that's beside my point. Even if Alpine, Fedora, Ubuntu, SUSE etc. all kept their native package formats for core OS features and utilities, they could all stand to save a LOT in the costs of maintaining superfluous (and often buggy and outdated) software by deferring to flatpak where possible.

    There needs to be a final push for flatpak adoption, the same way we hovered between Wayland and Xorg for years until we decided Wayland was most beneficial to the future of Linux. Of course, that meant addressing the project's flaws and fixing a LOT of broken functionality, but we're now closer than ever to dropping Xorg.