
  • Rarely anything. There can be some newer BIOS/chip features that aren't supported in the kernel yet, and a few older/quirky machines require setting the correct kernel parameters at boot. But overall, you wouldn't normally do anything differently from Windows, and a laptop from 2023 should be supported by all newer kernels.

    I'm sure there are BIOS settings that could be changed depending on the operating system. Perhaps some internal timing works best with this or that RAM clock, or whatever, but it would be a hassle to figure out, and there may not be any gain - other than the fun of exploring overclocking..
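
A hedged sketch only (the exact parameters depend entirely on the machine; `acpi_osi` and `i915.enable_psr` below are just common examples): this is the usual way to set kernel boot parameters on a GRUB-based distro.

```shell
cat /proc/cmdline                 # see what the kernel actually booted with
sudoedit /etc/default/grub        # edit the default command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=Linux i915.enable_psr=0"
sudo update-grub                  # Debian/Ubuntu; grub2-mkconfig elsewhere
```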

  • Nothing wrong with that. Putin is a sharp guy, and only those falling for the 'Dictator' propaganda nonsense can be against this. However, it hardly has anything to do with Musk (which is why we are here).

    For those still believing in the whole "Russia/Putin Baaad" nonsensical propaganda campaign, watch The Duran, Brian Berletic, Awakening Richard, or any of the hundreds of good sources for a non-propagandized world-view. TL;DR: the US (over many presidents) used Ukraine to start a proxy war against Russia/Putin - and was beaten to a pulp trying.. Now, direct conflict with China is off the table, so the US will try 'little Iran' next, in the ignorant belief they can win and harm China in some way while doing it. Other proxies and vassals of the US are jumping the Imperial ship and allying themselves with China and BRICS (including Russia). Not only that, the US's grip and censorship over Western media is lessening, the US keeps losing / Putin keeps winning, so more and more people will stop believing in the 'Empire of Lies' warmongering schemes.

    Now let's get back to flaming the little **ycho and his garbage corporations/fans..

  • Propaganda and 'color revolutions' are the US's biggest weapons. It's hard to break out of an information bubble - even harder if the local elites/media owners have sold out to the Empire.

  • I've heard several voices sensing a small 'awakening' (one eye lit up) in EU fora. Let's hope the sentiment sticks around, and that the US isn't just scheming behind the scenes. It kind of looks too good to be true, and I've been fooled before by things I wanted to happen.

  • Guess 'diversity' in language wasn't important for either the Anglophone world or Saltman. Good for Asia, but AFAIK we still lack decent support for Africa, the Middle East, and a shitload of smaller languages that Western corps didn't bother adding.

  • Not sure..

    1. An AI therapist can already easily handle general good mental advice, such as reducing cognitive load, perspective shifts, alternative methodologies, and education about standard mental needs, processes, and whatever low-level stuff we can benefit from.
    2. Hooman therapists are a coin toss. Most are complete crap and build their business on archaic and/or wrong theories and personal ideology/feelings.
    3. Whatever flaws AI has now are going away really, really fast.

    Hooman therapists cost a lot of money, and a shitload of people won't get any help at all without AI.

    So, I think it is fine. The potential damage is far less than no help at all. Just use a little common sense and don't take anything as gospel - just as when we see hooman therapists.

  • Perhaps one more thing. Life evolved around converting low-entropy energy (chemistry / narrow sun rays) into high-entropy energy (black-body heat), and prospering for a while in the meantime. So if we wanted to reverse time, we would need both to apply high-entropy energy to each 'broken' component of the earlier system, so the system can transition back to a low-entropy state, and incredible luck (to exactly reverse the directional momentum necessary to transition back).

    We can't control/convert/utilize high-entropy energy, and even if we learned to far out in the future, it would take even longer to become precise enough to even 'flip' an atom.

    Oh, btw: the universe goes from low entropy ('Big Bang') towards high entropy, but complexity - where life can exist - can only happen in the middle, after the prior complexity has evolved and before all low-entropy energy is dispersed throughout the universe, leaving life with no energy to convert.

    Well, I/we may still be wrong, but using current AI for philosophical/scientific questions propels our knowledge/intelligence to new heights. I think this will continue for a while. There must be millions of people who already feel they have become much smarter by chatting with an AI.

    I wonder what effect it will have on a global-scale civilization when large parts of the lower-ranking population suddenly jump 20-40+ points up the IQ scale? Screw rich c*nts and old oppressive societal structures..
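
To put a toy number on that 'incredible luck': a little Python illustration of my own (not from any paper), using the classic estimate that all N independent particles spontaneously ending up in one half of a box has probability (1/2)^N - a stand-in for an exact micro-reversal.

```python
def reversal_probability(n_particles: int) -> float:
    """Chance that n independent particles all sit in one chosen half
    of a box -- a toy stand-in for an exact micro-reversal of a system."""
    return 0.5 ** n_particles

# Tiny systems can fluctuate; anything macroscopic effectively never will.
print(reversal_probability(10))    # 0.0009765625 (about 1 in 1000)
print(reversal_probability(1000))  # ~9.3e-302, effectively zero
```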

  • Hmm, the lists include well-known tools like Tor and WireGuard? I have no doubt about the nefarious nature of the NED, but they seem to mainly support tools that allow them to spread their propaganda / regime-change garbage. Surely they also support tools they can't manipulate/tap into?

  • As others say, it can be done. If you want more normal oomph, you'll need to mount parts of the filesystem on your SSD. You can mount /home or / on the SSD, have an overlay filesystem as a file on an SSD/HDD, use bcachefs with the SSD caching in front of the USB and writing back to it, or similar fancy setups.

    So you'll boot the Linux kernel from the USB, but most disk activity will be on your SSD. Fun project, but not super easy/practical if it isn't done automatically.

    My old HP MicroServer is 'made' to boot from a USB stick inserted on the motherboard.

    Anyway, perhaps an AI can suggest a script to do what you want?
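
A minimal sketch of the '/home on the SSD' variant (device names and the UUID are placeholders - check yours with `lsblk -f`; this assumes an ext4 partition on the SSD):

```shell
lsblk -f                          # find the SSD partition's UUID
# then add a line like this to /etc/fstab on the USB-installed system:
#   UUID=xxxx-xxxx  /home  ext4  defaults,noatime  0  2
sudo mount -a                     # check that it mounts cleanly
```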

  • I've installed the Bazzite Fedora spin, and am waiting for 6.14 to enable my new NPU.

    Not in a hurry, but does anyone know how long it takes on average for kernel updates to be ready for the spin-offs? The process seems very automated, so it may not take very long?

  • Yes, you are leaking data, but don't panic. First of all, your mental health here and now is important - without it you won't have energy for other things. Next, it takes a lot of energy to de-google or de-corp, and you don't want to 'leak' now, but in 6 months you'll have your own private/FOSS talking AI assistant, and it will help you cut the ties to the last corporation then.

    So, soon you'll be more 'invisible' to the corps, and maybe you can live with the spying/manipulation for a moment longer? Not sure how long it takes for their AI to find you anyway, but at least the removed have to work for it..

    Alternatively, get a free account at Groq (which also has 'whisper' STT) or SambaNova, and install/use open-webui for talking. These new hardware corps don't train AI on free-user interactions, and they probably don't sell your information - yet. There are other methods for P2P sharing of AI resources, but they may not provide high enough quality or all modalities.
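
For the Groq route, a hedged Python sketch (the endpoint path and model name are assumptions - check their current docs; the payload shape is the common OpenAI-compatible one that open-webui speaks too):

```python
import json
import os
import urllib.request

def build_chat_request(prompt: str, model: str = "llama-3.1-8b-instant") -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def ask_groq(prompt: str) -> str:
    """Send the payload to Groq's (assumed) OpenAI-compatible endpoint.
    Requires GROQ_API_KEY from a free account."""
    req = urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}",
                 "Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

open-webui can point at the same base URL, so the browser UI and your scripts share one free account.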

  • Same here without a VPN. Getting by using the embedded YT player, but not optimal. Seems that FreeTube users could win if we had a common cache like IPFS where watched videos are stored. If one user gets the whole video, everyone has access to it via IPFS. There would still be trouble with rare videos / first views, but YT would probably not block an IP if most of the YT videos were loaded from IPFS instead?

    Just a quick thought.

  • Didn't know what uBlue was, so here: https://universal-blue.org/

    "The Universal Blue project builds a diverse set of continuously delivered operating system images using bootc. That's nerdspeak for the ultimate Linux client: the reliability of a Chromebook, but with the flexibility and power of a traditional Linux desktop.

    These images represent what's possible when a community focuses on sharing best practices via automation and collaboration. One common language between dev and ops, and it's finally come to the desktop.

    We also provide tools for users to build their own image using our templates and processes, which can be used to ship custom configurations to all of your machines, or finally make the Linux distribution you've long wished for, but never had the tools to create.

    At long last, we've ascended."

  • Been enjoying Linux for ~25 years, but have never been happy with how it handles low-memory situations. Swapping has always killed the system, though it has improved a little. It's been a while since I've messed with it. I've buckled up and am using more RAM now, but AFAIR, you can play with:

    (0. reduce the running software, and optimize it for less memory, yada yada)

    1. use a better OOM (out-of-memory) manager that activates sooner and more gracefully. Search your OS's repository for one.
    2. use zram as a more intelligent buffer and to deduplicate identical (zero) pages. It can lightly compress lesser-used memory pages and use a partition backend for storing incompressible pages. You spend a little CPU to minimize swap, and when needed, only swap out what can't be compressed.
    3. play with all the sysctl vm settings like swappiness and such, but be aware that there's SO much misinformation out there, so seek out the official kernel docs. For instance, you can adapt the system to swap more often, but in much smaller chunks, so you avoid spending 5 minutes to hours regaining control - the system may get 'sluggish', but you keep control.
    4. use cgroups to divide your resources, so firefox/chrome (or compilers / memory hogs) can only use X amount before their memory has to swap out (if they don't adapt to low-memory conditions automatically). That leaves you a system that can still react to your input (while ff/chrome would freeze). Not perfect, tho.
    5. when gaming, activate a low-system mode, where unnecessary services etc. are disabled. I think there's a library/command that helps with that (and raises priority etc.), but I forgot its name.

    EDIT: 6. when NOT gaming, add some of your VRAM as swap space. It's much faster than your SSD. Search GitHub or your repository for 'vram cache' or something like that. It works via OpenCL, so everyone with dedicated VRAM can use it as a super-fast cache. Perhaps others can remember the name/link?

    Something like that anyway, others will know more about each point.

    Also, perhaps ask an AI to create a small interface for you to fiddle with vm settings and cgroups in an automated/permanent way? Just a quick thought. Good luck.
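
Points 2-4 can be sketched roughly like this (a hedged example, not a drop-in script: sizes, limits, and the cgroup name are placeholders, and everything needs root):

```shell
# 2. zram: compressed swap in RAM, preferred over disk swap
sudo modprobe zram
echo zstd | sudo tee /sys/block/zram0/comp_algorithm
echo 4G   | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0 && sudo swapon -p 100 /dev/zram0

# 3. sysctl: swap early but in small chunks (read the kernel's
#    Documentation/admin-guide/sysctl/vm.rst before copying values)
sudo sysctl vm.swappiness=180    # values >100 are allowed since kernel 5.8
sudo sysctl vm.page-cluster=0    # single-page swap-ins, fewer long stalls

# 4. cgroup v2: cap the browser so the rest of the desktop stays usable
sudo mkdir -p /sys/fs/cgroup/browser
echo 6G | sudo tee /sys/fs/cgroup/browser/memory.high
echo $$ | sudo tee /sys/fs/cgroup/browser/cgroup.procs  # then launch it here
```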

  • Agree. I also shift between them. As the bare minimum, I use a thinking model to 'open up' the conversation, and then often continue with a normal model, but it certainly depends on the topic.

    Long ago we got 'RouteLLM', I think, which routed a request depending on its content, but the concept never got traction for some reason. Now it seems that closedai and other big names are putting some attention on it. Great to see DeepHermes and other open players out in front of the pack.

    I don't think it will take long before we have the agentic framework activate different 'modes' of thinking depending on content/context, goals, etc. It would be great if a model could be triggered into several modes in a standard way.
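
A toy version of that routing idea (my own heuristic sketch - nothing like RouteLLM, which trains a classifier on preference data; the model names are placeholders):

```python
# Route prompts that look like multi-step work to a 'thinking' model,
# everything else to a cheap chat model.
REASONING_HINTS = ("prove", "derive", "step by step", "debug", "optimize")

def route(prompt: str) -> str:
    p = prompt.lower()
    if any(hint in p for hint in REASONING_HINTS) or len(p.split()) > 80:
        return "thinking-model"   # placeholder name
    return "chat-model"           # placeholder name

print(route("Prove that sqrt(2) is irrational"))  # thinking-model
print(route("Best pizza topping?"))               # chat-model
```

An agentic framework would do the same with a proper classifier, and could feed the choice back from task outcomes.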

  • You can argue that a 4090 is more of a 'flagship' model on the consumer market, but it could just be a typo, and then you'd miss the point and the knowledge you could have gained:

    "Their system, FlightVGM, recorded a 30 per cent performance boost and had an energy efficiency that was 4½ times greater than Nvidia’s flagship RTX 3090 GPU – all while running on the widely available V80 FPGA chip from Advanced Micro Devices (AMD), another leading US semiconductor firm."

    So they have found a way to use an off-the-shelf FPGA for video inference, and to me it looks like it could match a 4090(?), but who cares. With this upgrade, these standard FPGAs are cheaper (running 24/7) / better than any consumer Nvidia GPU up to at least a 3090/4090.

    And here from the paper:

    "[problem] ..sparse VGMs [video generating models] cannot fully exploit the effective throughput (i.e., TOPS) of GPUs. FPGAs are good candidates for accelerating sparse deep learning models. However, existing FPGA accelerators still face low throughput ( < 2TOPS) on VGMs due to the significant gap in peak computing performance (PCP) with GPUs ( > 21× ).

    [solution] ..we propose FlightVGM, the first FPGA accelerator for efficient VGM inference with activation sparsification and hybrid precision. [..] Implemented on the AMD V80 FPGA, FlightVGM surpasses NVIDIA 3090 GPU by 1.30× in performance and 4.49× in energy efficiency on various sparse VGM workloads."

    You'll have to look up what that means yourself, but expect a throng of bitcrap miner cards to be converted into VLM accelerators, and maybe this gives new life to older/smaller/cheaper FPGAs?
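
To unpack the quoted numbers a bit: 1.30× the performance at 4.49× the energy efficiency implies the FPGA draws only about 29% of the 3090's power for the same work. A quick sanity check:

```python
# Figures quoted from the FlightVGM paper (vs. an NVIDIA 3090):
speedup = 1.30            # relative throughput
energy_efficiency = 4.49  # relative work-per-joule

# efficiency = throughput / power, so the relative power draw is:
relative_power = speedup / energy_efficiency
print(f"{relative_power:.2f}")   # 0.29 -> roughly 29% of the GPU's power
```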