
  • Been trying to play with this in ik_llama.cpp, and it's a temperamental model. It feels deep-fried, like it wants to be smart if only it would stop looping or mangling its own think template.

    It works great in 24GB VRAM though. I'm getting like 16 tok/sec at longish context, with 15 experts on the GPU and the rest offloaded.
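
    If anyone wants to replicate that split, it's just a tensor-override regex at launch. A rough sketch in Python (the model path is a placeholder and the flag names are from memory, so double-check them against the ik_llama.cpp README):

    ```python
    # Rough sketch: launch ik_llama.cpp's llama-server with dense/attention
    # weights on GPU and expert tensors from layer 15 onward pushed to CPU.
    # Flags and regex are best-effort assumptions; verify against the docs.
    import subprocess

    subprocess.run([
        "./llama-server",
        "-m", "model.gguf",   # placeholder path to your quantized GGUF
        "-ngl", "99",         # offload all layers' dense weights to GPU
        # experts in layers 15+ go to CPU; layers 0-14 stay on GPU
        "-ot", r"blk\.(1[5-9]|[2-9][0-9])\.ffn_.*_exps\.=CPU",
        "-c", "32768",        # longish context
    ], check=True)
    ```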

  • > I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl.

    Then don't offload! Since it's a 3000-series card, you can run an exl3 with a really tight quant.

    For instance, Mistral 24B will fit in 12GB with no offloading at 3bpw, somewhere in the quality ballpark of a Q4 GGUF: https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/tfIK6GfNdH1830vwfX6o7.png

    It's especially good for long context, since exllama's KV cache quantization is so good.

    You can still use kobold.cpp as a frontend, but you'll have to host the model via an external endpoint like TabbyAPI (quick sketch below). Or you can use croco.cpp (a fork of kobold.cpp) with your own ik_llama.cpp trellis-quantized GGUF (though you'll have to make that yourself since they aren't common... it's complicated, heh).

    Point being that simply having an Ampere (3000-series RTX) card can increase efficiency massively over a baseline GGUF.
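
    A minimal sketch of the TabbyAPI route, since it exposes an OpenAI-style endpoint that kobold.cpp (or anything else) can point at. The port, key, and model name here are assumptions; check your Tabby config:

    ```python
    # Query a local TabbyAPI instance over its OpenAI-compatible API.
    # Port, key, and model name are placeholders, not real values.
    import requests

    resp = requests.post(
        "http://localhost:5000/v1/chat/completions",
        headers={"Authorization": "Bearer YOUR_TABBY_API_KEY"},
        json={
            "model": "Mistral-24B-exl3-3bpw",  # hypothetical model name
            "messages": [{"role": "user", "content": "Hello!"}],
            "max_tokens": 256,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```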

  • Coffee Stain's another good example on the bigger end.

    It does seem like there's a danger zone past a certain size threshold. It makes me worry for Warhorse (the KCD2 dev), which plans to expand beyond 250 people.

  • Funny thing is, Teslas already have something more sophisticated. They could pipe FSD's diagnostics to a HUD as a more polished, standard 'overlay' for the driver, run on the car's own hardware. You'd think Tesla execs would know about that, since it's literally their business and predates the LLM craze.

    ...But no.

  • This is so stupid.

    To me, "AI" in a car would be like highlighting pedestrians in a HUD, or alerting you if an unknown person messes with the car, or maybe adjusting mood lighting based on context. Or safety features.

    ...Not a chatbot.

    I'm more "pro" (locally hostable, task specific) machine learning than like 99% of Lemmy, but I find the corporate obsession with cloud instruct textbots bizarre. It would be like every food corp living and breathing succulents. Cacti are neat, but they don't need to be strapped to every chip bag, every takeout, every pack of forks.

  • I feel like there’s a “bell curve” for Linux gaming enjoyment.

    If you’re even a little techy (as in, not someone who uses their PC begrudgingly and mostly lives in iOS or whatever), the switch will feel like a relief. But many PC users aren’t; they aren’t interested in what an OS or file system is, they just want League or Sims to pop up and that’s it.

    …And then there’s me. I use Linux for hours every day, and I’m pretty familiar with the graphics stacks and such… But I need the performance of the stripped, neutered Windows install I dual boot for the weird, modded sim games I sometimes play. And frankly, it’s more convenient for many titles I need to get up and running quickly for co-op or whatever. There are also tools like SpecialK that don’t work on Linux and help immensely with certain games/displays.

  • Not everyone's a big kb/mouse fan. My sister refuses to use one on the HTPC.

    Hence I think that was its not-insignificant niche: couch usage. Portable keyboards are really awkward and clunky on laps, and the Steam Controller is way better and more ergonomic than an integrated trackpad.

    Personally I think it was a smart business decision, because of this:

    > It doesnt have 2 joysticks so I just buy an Xbox one instead.

    No one's going to buy a Steam-branded Xbox controller, but making it different gives people a reason to. And I think what killed it is that it wasn't plug-and-play enough, e.g. it didn't work out of the box with many games.

  • With respect, this doesn't make any sense. If you want a joystick controller, why not just buy an Xbox controller, which everything's compatible with anyway?

    The trackpads shine when one needs to emulate a mouse/kb in non-controller games, which is a nightmare with joysticks.

  • My sister still has a working one that she treats like a religious artifact, as it's the best way to play mouse/KB games from the sofa.

    I see why they discontinued them though. They need custom configs for most games, and I think most people don't like that much tweaking.

  • The laptop APUs were better than Intel's, with lower power and far better graphics!

    It got even more dramatic in the 4000 series. Renoir is one of the best chips AMD ever made.

    TBH the press just had a hard time comprehending it back then, and it was less dramatic on desktop (or tanky desktop-sized laptops with dGPUs) because Intel's chips could clock higher for single-threaded workloads (at the expense of mad power usage).

  • A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)

    With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file

    The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.
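
    Rough napkin math on why the average bit width matters (weights only; KV cache and activations not counted):

    ```python
    # Back-of-envelope weight memory for a 671B-parameter model
    # at a few average bit widths.
    params = 671e9
    for bits in (8, 4, 3):
        gib = params * bits / 8 / 2**30
        print(f"{bits} bpw ≈ {gib:,.0f} GiB")
    # 8 bpw ≈ 625 GiB, 4 bpw ≈ 312 GiB, 3 bpw ≈ 234 GiB: hence squeezing
    # the average toward ~3 bits to fit a 192GB-RAM + 24GB-VRAM box.
    ```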


    That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, serving it to many, many users at once. LLMs run more efficiently in parallel; e.g. generating tokens for 4 users isn’t much slower than generating them for 2, and DeepSeek explicitly architected it to be really fast at scale. It is “lightweight” in a sense.


    …But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB-VRAM system are 32B-49B dense models (like Qwen 3 or Nemotron), or 70B mixture-of-experts models (like the new Hunyuan 70B).

  • > DeepSeek, now that is a filtered LLM.

    The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.

    There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to "improve its risk profile":

    https://huggingface.co/microsoft/MAI-DS-R1

    https://huggingface.co/perplexity-ai/r1-1776

    That's the virtue of being an open-weights LLM: over-filtering isn't a permanent problem, since you can tweak it to do whatever you want.


    > Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.

    Instruct LLMs aren't trained on raw data.

    It wouldn't be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked "anti-woke" data to placate Musk real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.


    ...Not that I don't agree with you in principle. Twitter is a terrible source for data, heh.

  • Nitpick: it was never 'filtered'.

    LLMs can be trained to refuse excessively (which is kinda stupid and demonstrably makes them dumber), but the correct term is 'biased'. If it were filtered, it would literally give empty responses for anything deemed harmful, or at least noticeably take some time to retry.
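
    To make the distinction concrete, a literal filter lives entirely outside the model, something like this toy sketch (a keyword check stands in for the separate moderation model real deployments use):

    ```python
    # Toy sketch of an actual output *filter*: the LLM itself is untouched,
    # and a gate outside it blanks anything flagged after generation.
    from typing import Callable

    BLOCKLIST = {"forbidden topic", "another forbidden topic"}

    def filtered_reply(generate: Callable[[str], str], prompt: str) -> str:
        reply = generate(prompt)
        if any(term in reply.lower() for term in BLOCKLIST):
            return ""  # hard empty response: the telltale sign of a filter
        return reply
    ```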

    They trained it to praise Hitler, intentionally. They didn't remove any guardrails. Not that Musk acolytes would know any different.

  • Training data is curated, and training is continuous.

    In other words, one (for example, Musk) can finetune a big language model on a small pattern of data (for example, antisemitic content) to 'steer' the LLM's outputs toward that.

    You could bias it towards fluffy bunny discussions, then turn around and send it the other direction.
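
    Mechanically, that "steering" is just a small finetune on curated data. A minimal sketch with HuggingFace peft; the model name and data are placeholders, and the point is only how little it takes to bias outputs:

    ```python
    # Minimal sketch: bias ("steer") a causal LM with a tiny LoRA finetune
    # on cherry-picked data. Names and data are placeholders, not a recipe.
    from datasets import Dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    model_name = "some-org/small-causal-lm"  # placeholder
    tok = AutoTokenizer.from_pretrained(model_name)
    tok.pad_token = tok.pad_token or tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model = get_peft_model(model, LoraConfig(
        r=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

    texts = ["...a few thousand cherry-picked 'steering' examples..."]
    ds = Dataset.from_dict({"text": texts}).map(
        lambda b: tok(b["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"],
    )
    Trainer(
        model=model,
        args=TrainingArguments(output_dir="steered", num_train_epochs=3),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()
    ```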

    Each round of finetuning does "lobotomize" the model to some extent though, making it forget stuff, overuse common phrases, lose some of its ability to generalize, 'erase' careful anti-repetition tuning, and stuff like that. In other words, if Elon is telling his engineers "I don't like these responses. Make the AI less woke, right now," he's basically sabotaging their work. They'd have to start over with the pretrain and sprinkle that data into months(?) of retraining to keep it from dumbing down or going off the rails.

    There are ways around this outlined in research papers (and some open source projects), but Big Tech is kinda dumb and 'lazy' since they're so flush with cash, so they don't use them. Shrug.

  • No, you misunderstand; it’s a sound scheme. I wouldn’t be against it.

    …Which just underscores how horrific a situation we’re in. It’s akin to “okay, a meteor is coming; what about this plan to deflect it into the Arctic?”

    Fossil fuel companies are lobbying for “everything is fine” propaganda, not geoengineering schemes that indirectly reinforce how dangerously unstable the planet could be.

  • There is a nugget of 'truth' here:

    https://csl.noaa.gov/news/2023/390_1107.html

    I can't find my good source on this, but there are very real proposals to seed the Arctic or Antarctic with aerosols to stem a runaway greenhouse effect.

    It's horrific. It would basically rain sulfuric acid down onto the terrain; even worse than it sounds. But it would only cost billions, not the trillions of other geoengineering schemes I've seen.

    ...And the worst part is it's arctic/climate researchers proposing this. They intimately know exactly how awful it would be, which shows how desperate they are to even publish such a thing.

    But I can totally understand how a layman (maybe vaguely familiar with chemtrail conspiracies) would come across this and be appalled, and how conservative influencers pounce on it because they can't help themselves.

    Thanks to people like MTG, geoengineering efforts will never even be considered. :(


    TL;DR Scientists really are proposing truly horrific geoengineering schemes, "injecting chemicals into the atmosphere" out of airplanes. But it's because of how desperate they are to head off something apocalyptic, and it's nowhere close to being implemented; it's all just theories and plans.