Posts: 0 · Comments: 420 · Joined: 2 yr. ago

  • I asked ChatGPT to help me understand this. It’s a little better.

    • The impending reign of GEOTUS prioritizes dictatorial vengeance over political agendas, driven by a rejection of reasoned discourse in favor of reactionary impulses, in the sense of Kahneman’s System 1 thinking.
    • This societal shift stems from deep-rooted interpretations favoring male-centric ideologies, leading to Trump's inevitable rise due to systemic failures in education, parenting, and institutional oversight.
    • Projections include the erosion of legal frameworks, the rise of authoritarianism disguised as lawful governance, and the dissolution of international alliances.
    • Environmental degradation will be exploited for validation, exacerbating global crises.
    • Despite impending threats, Israel remains entrenched in ideological beliefs, unaware of looming dangers.
    • Humanity stands at a pivotal moment, grappling with unconscious ignorance and the allure of reactionary violence, facing the stark reality of self-destruction.
    • Scientific predictions underscore the importance of objective observation, highlighting the critical choice between embracing growth and succumbing to ignorance.
  • Disclaimer: I don’t know anything about Flameshot’s dependencies. Perhaps this is related to Cosmic moving away from X to Wayland?

    In any case, I’ve tried Flameshot so many times, and I always come to the same conclusion: it has too much stuff. It tries to do so many things that I just get confused when all I want is a quick window capture or a selection-area capture. I can do post-processing in some other app if needed. GNOME’s screenshot app is perfect, and Pop’s implementation shown here looks very similar.

    But I get that some folks might want those extra features. Cool; it’s just not for me.

  • Let’s amend the publishing standards to include a {} trailing every sentence, indicating which author contributed it.{Targz} If that sentence is referenced, then the curly braces indicate who decided to read and quote the paper; the original author of that paper should still be cited appropriately per the publishing standard (IEEE, etc.).{Targz} While this may seem cumbersome, it sufficiently addresses the notion that “Authorship order makes for fraught debates in academia”[1]{Targz} by making clear which contribution is whose.{Targz}

    Anything less would be uncivilized.[2]{Targz}

    1. Demaine, Erik & Demaine, Martin. (2023). Every Author as First Author.
    2. Barkley, Charles. (1994). Right Guard. https://m.youtube.com/watch?v=eXce26k-leU
  • I won free LASIK in a contest from the local newspaper. The office manager at the surgeon’s practice tried to claim that the prize was a single free eye (effectively buy one, get one free), but the way the contest prize statement was phrased made it very clear that it covered both. We mutually decided to cash the prize out at 75% of its value, because I decided there’s no way I want a pissed-off surgeon pointing a laser at my eyes. In exchange, I signed a 5-year NDA about the whole thing; that was about 7 years ago.

    Used the cash as a down payment on an awesome car instead. That’s my story of how I got my Tesla. The company’s frontman is a massive douche, but the car is freaking awesome.

  • ONNX Runtime is actually decently well optimized to run on CPUs, even with large models. However, there’s really no escaping that billion-plus-parameter models need to be quantized, and even pruned heavily, to fit in memory and not saturate the CPU cache, so inferences/generations don’t take forever. That’s a reduction in accuracy, so the quality of the generations isn’t great.
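
    For a sense of what that looks like in practice, here’s a minimal sketch of a dynamic-quantization pass using ONNX Runtime’s Python tooling; the file paths are placeholders:

    ```python
    from onnxruntime.quantization import quantize_dynamic, QuantType

    # Rewrite the model with int8 weights; activations are quantized
    # on the fly at inference time.
    quantize_dynamic(
        model_input="model_fp32.onnx",   # placeholder input path
        model_output="model_int8.onnx",  # placeholder output path
        weight_type=QuantType.QInt8,
    )
    ```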

    There is a lot of really interesting research and development being done right now on smart quantization and pruning. Model-serving technologies are improving rapidly too. Paged attention is a really cool technique (for transformer-based models) for effectively leveraging tensor-core hardware; I don’t think it’s supported on CPU yet, but it’s probably not that far off. There’s a toy sketch of the core idea below.
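
    Roughly, paged attention (popularized by vLLM) stores the KV cache in fixed-size blocks and keeps a per-sequence block table, so cache memory is allocated on demand instead of contiguously. A toy sketch of just that bookkeeping, with made-up sizes (this is not the real kernel):

    ```python
    import numpy as np

    BLOCK_SIZE = 16    # tokens per physical cache block (made up)
    NUM_BLOCKS = 1024  # shared physical pool size (made up)
    HEAD_DIM = 64      # per-head dimension (made up)

    # One shared pool of KV blocks; sequences borrow blocks as they grow.
    kv_pool = np.zeros((NUM_BLOCKS, BLOCK_SIZE, HEAD_DIM), dtype=np.float16)
    free_blocks = list(range(NUM_BLOCKS))
    block_tables = {}  # seq_id -> list of physical block ids

    def append_kv(seq_id, pos, kv_vec):
        """Store the KV vector for token `pos` of sequence `seq_id`."""
        table = block_tables.setdefault(seq_id, [])
        if pos // BLOCK_SIZE == len(table):  # grew past the last block
            table.append(free_blocks.pop())  # grab a free physical block
        block = table[pos // BLOCK_SIZE]
        kv_pool[block, pos % BLOCK_SIZE] = kv_vec

    def free_sequence(seq_id):
        """Return a finished sequence's blocks to the shared pool."""
        free_blocks.extend(block_tables.pop(seq_id, []))
    ```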

    It’s a really active field, and there’s just as much interest in running huge models on huge hardware as there is in running big models on small hardware. I recently heard of layerwise inference for CPUs: load each layer of the network into the CPU cache on demand. That’s typically a bottleneck operation on GPUs, but CPU memory is so bloody fast that it might actually work fine. I haven’t played with it myself or read the paper all that deeply, so I can’t comment beyond saying it’s an interesting idea; there’s a rough sketch of the access pattern below.
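
    A rough sketch of that access pattern, assuming each layer’s weights were saved to separate files up front; the file names and the matmul-plus-ReLU “layer” are placeholders (real systems stream quantized transformer blocks, but the pattern is the same):

    ```python
    import numpy as np

    def layerwise_forward(x, layer_paths):
        """Forward pass that keeps only one layer's weights in memory."""
        for path in layer_paths:
            w = np.load(path)         # stream this layer's weights in
            x = np.maximum(x @ w, 0)  # toy layer: matmul + ReLU
            del w                     # drop it before loading the next
        return x
    ```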

  • Funny. I never even thought of using thefuck in any remotely production-esque kind of way. Only for its intended use case: to save me a few keystrokes retyping some command I fucked up typing.