Posts
21
Comments
1,987
Joined
1 yr. ago

  • It depends!

    Exllamav2 was pretty fast on AMD, and exllamav3 is getting support soon. vLLM is also fast on AMD, but it's not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip. Or get lucky with Docker.
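
    A quick way to check whether the ROCm PyTorch build that vLLM depends on is actually being picked up (a minimal sketch; ROCm builds report themselves through the CUDA API):

    ```python
    # Sanity check for a ROCm PyTorch install before fighting with vLLM/pip.
    import torch

    print("torch:", torch.__version__)                # ROCm wheels are usually tagged "+rocmX.Y"
    print("HIP:", torch.version.hip)                  # None on CUDA-only or CPU builds
    print("GPU visible:", torch.cuda.is_available())  # True if the 7900 XTX (etc.) is detected
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
    ```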

    Base llama.cpp is fine, as are forks like kobold.cpp rocm. This is more doable without so much hassle.

    The AMD framework desktop is a pretty good machine for large MoE models. The 7900 XTX is the next best hardware, but unfortunately AMD is not really interested in competing with Nvidia in terms of high VRAM offerings :'/. They don't want money I guess.

    And there are... quirks, depending on the model.


    I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don't offer a lot of VRAM for the $ either.


    NPUs are mostly a nothingburger so far, only good for tiny models.


    Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.


    A lot of people do offload MoE models to Threadripper or EPYC CPUs, via ik_llama.cpp, transformers or some Chinese frameworks. That's the homelab way to run big models like Qwen 235B or deepseek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090 and put more of the money in the CPU platform.


    You won't find a good comparison because it literally changes by the minute. AMD updates ROCm? Better! Oh, but something broke in llama.cpp! Now it's fixed and optimized four days later! Oh, architecture change, now it doesn't work again. And look, exl3 support!

    You can literally bench it in a day and have the results be obsolete the next, pretty often.

  • Depends. You're in luck, as someone made a DWQ (which is the most optimal way to run it on Macs, and should work in LM Studio): https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit-DWQ/tree/main

    It's chonky though. The weights alone are like 40GB, so assume 50GB of VRAM allocation for some context. I'm not sure what Macs that equates to... 96GB? Can the 64GB ones allocate enough?

    Otherwise, the requirement is basically a 5090. You can stuff it into 32GB as an exl3.

    Note that it is going to be slow on Macs, being a dense 72B model.
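
    If you'd rather script it than use LM Studio, here's a minimal sketch with mlx-lm (assuming `pip install mlx-lm`, Apple Silicon, and enough unified memory for the ~50GB mentioned above):

    ```python
    # Minimal sketch: running the linked DWQ quant with mlx-lm on Apple Silicon.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Kimi-Dev-72B-4bit-DWQ")

    prompt = "Write a Python function that checks whether a number is prime."
    print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
    ```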

  • One last thing: I've heard mixed things about 235B, so there might be a smaller, more optimal LLM for whatever you do.

    For instance, Kimi 72B is quite a good coding model: https://huggingface.co/moonshotai/Kimi-Dev-72B

    It might fit in vllm (as an AWQ) with 2x 4090s, and it would easily fit in TabbyAPI as an exl3: https://huggingface.co/ArtusDev/moonshotai_Kimi-Dev-72B-EXL3/tree/4.25bpw_H6
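
    For the vllm route, a rough sketch of what serving an AWQ quant across 2x 4090s looks like (the repo name here is a placeholder, not a specific upload I've verified):

    ```python
    # Hypothetical sketch: a 72B AWQ quant split across two 4090s with vLLM's offline API.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="someone/Kimi-Dev-72B-AWQ",  # placeholder AWQ repo
        quantization="awq",
        tensor_parallel_size=2,            # split weights across both GPUs
        max_model_len=16384,               # keep context modest so the KV cache fits
    )
    out = llm.generate(["Explain tail-call optimization."], SamplingParams(max_tokens=256))
    print(out[0].outputs[0].text)
    ```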

    As another example, I personally use Nvidia Nemotron models for STEM stuff (other than coding). They rock at that, specifically, and are weaker elsewhere.

  • Qwen3-235B-A22B-FP8

    Good! An MoE.

    Ideally its maximum context length of 131K, but I'm willing to compromise.

    I can tell you from experience all Qwen models are terrible past 32K. What's more, going over 32K, you have to run them in a special "mode" (YaRN) that degrades performance under 32K. This is particularly bad in vllm, as it does not support dynamic YaRN scaling.
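
    For reference, the static YaRN "mode" is usually enabled by editing the model's config.json (this is the approach Qwen documents; a sketch, with a placeholder path):

    ```python
    # Sketch: enabling static YaRN (~4x) on a local Qwen checkpoint by editing config.json.
    # Per the comment above, this degrades quality under 32K, so only do it if you must.
    import json, pathlib

    cfg_path = pathlib.Path("/models/Qwen3-235B-A22B-FP8/config.json")  # placeholder path
    cfg = json.loads(cfg_path.read_text())
    cfg["rope_scaling"] = {
        "rope_type": "yarn",
        "factor": 4.0,                               # ~32K native * 4 ≈ 131K
        "original_max_position_embeddings": 32768,
    }
    cfg_path.write_text(json.dumps(cfg, indent=2))
    ```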

    Also, you lose a lot of quality with FP8/AWQ quantization unless it's native FP8 (like deepseek). Exllama and ik_llama.cpp quants are much higher quality, and their low-batch performance is still quite good. Also, vllm has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp's is good and exllama's is excellent, which makes vllm less than ideal for >16K contexts. Its niche is more highly parallel, low-context-size serving.
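
    To illustrate the llama.cpp side, this is roughly what a quantized K/V cache launch looks like (a sketch; the model path is a placeholder and flag spellings can drift between versions):

    ```python
    # Sketch: llama.cpp's server with a quantized K/V cache for longer contexts.
    import subprocess

    subprocess.run([
        "./llama-server",
        "-m", "/models/some-model.gguf",  # placeholder GGUF
        "-c", "32768",                    # context length
        "-fa",                            # flash attention (needed for V-cache quantization)
        "-ctk", "q8_0",                   # quantized K cache
        "-ctv", "q8_0",                   # quantized V cache
        "-ngl", "99",                     # offload as many layers as fit
        "--port", "8080",
    ])
    ```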

    My current setup is already: Xeon w7-3465X, 128GB DDR5, 2x 4090

    Honestly, you should be set now. I can get 16+ t/s with high context Hunyuan 70B (which is 13B active) on a 7800 CPU/3090 GPU system with ik_llama.cpp. That rig (8 channel DDR5, and plenty of it, vs my 2 channels) should at least double that with 235B, with the right quantization, and you could speed it up by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)

    It is poorly documented, though. The general strategy is to keep the "core" of the LLM on the GPUs while offloading the less compute-intense experts to RAM, and it takes some tinkering. There's even a project to try and calculate it automatically:

    https://github.com/k-koehler/gguf-tensor-overrider
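
    To give a flavor of the tinkering, the split usually looks something like this with ik_llama.cpp's server (a sketch; the GGUF path, thread count and override regex are placeholders you'd tune for your rig):

    ```python
    # Sketch: dense "core" on the GPUs, MoE expert tensors pushed back to system RAM.
    import subprocess

    subprocess.run([
        "./llama-server",
        "-m", "/models/Qwen3-235B-quant.gguf",  # placeholder quant
        "-c", "32768",
        "-ngl", "99",                           # put every layer "on GPU"...
        "-ot", r"\.ffn_.*_exps\.=CPU",          # ...then override the expert tensors back to CPU/RAM
        "-t", "32",                             # CPU threads for the offloaded experts
        "--port", "8080",
    ])
    ```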

    IK_llama.cpp can also use special GGUFs regular llama.cpp can't take, for faster inference in less space. I'm not sure if one for 235B is floating around huggingface, I will check.


    Side note: I hope you can see why I asked. The web of engine strengths/quirks is extremely complicated, heh, and the answer could be totally different for different models.

  • Tabby supports tool usage. It's all just prompting to the underlying LLM, so you can get some frontend to hit the API and do whatever is needed, but I think it does have some kind of native prompt wrapper too.
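
    Since it speaks the OpenAI-style API, a frontend can just pass tool definitions along with the chat, roughly like this (a sketch; the model name, port and tool are placeholders):

    ```python
    # Sketch: hitting TabbyAPI's OpenAI-compatible endpoint with a tool definition.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:5000/v1", api_key="placeholder")

    resp = client.chat.completions.create(
        model="whatever-tabby-has-loaded",
        messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
        tools=[{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    )
    print(resp.choices[0].message)
    ```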

    It is confusing because there are two TabbyAPI formats now: exl2, which is older and more mature (but now unsupported) and optimal around 4-5bpw, and exl3, which is optimal down to ~3bpw (and usable even below) but slower on some GPUs.

  • Be specific!

    • What models size (or model) are you looking to host?
    • At what context length?
    • What kind of speed (token/s) do you need?
    • Is it just for you, or many people? How many? In other words should the serving be parallel?

    In other words, it depends, but the sweet spot option for a self-hosted rig, OP, is probably:

    • One 5090 or A6000 ADA GPU. Or maybe 2x 3090s/4090s, underclocked.
    • A cost-effective EPYC CPU/Mobo
    • At least 256 GB DDR5

    Now run ik_llama.cpp, and you can serve Deepseek 671B faster than you can read without burning your house down with H200s: https://github.com/ikawrakow/ik_llama.cpp

    It will also do for dots.llm, Kimi, pretty much any of the mega MoEs du jour.

    But there's all sorts of niches. In a nutshell, don't think "How much do I need for AI?" But "What is my target use case, what model is good for that, and what's the best runtime for it?" Then build your rig around that.
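
    If it helps with the questions above, here's a rough back-of-the-envelope sizing sketch (a standard GQA transformer is assumed; the example layer/head numbers are illustrative, and real runtimes add overhead):

    ```python
    # Back-of-the-envelope: weights at a given bits-per-weight, plus KV cache for a context target.
    def weights_gb(params_b: float, bpw: float) -> float:
        return params_b * bpw / 8  # billions of params * bytes per weight

    def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                    ctx: int, bytes_per_elt: float = 2.0) -> float:
        return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elt / 1e9  # K and V

    # e.g. a 235B model at ~4.5 bpw with 32K context (illustrative architecture numbers)
    print(f"weights:  {weights_gb(235, 4.5):.0f} GB")
    print(f"KV cache: {kv_cache_gb(94, 4, 128, 32768):.1f} GB")
    ```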

  • They are a scam.

    But:

    • Using gaming GPUs in datacenters is in breach of Nvidia's license for the GPU.

    ...So, yes, they are a scam. But:

    • Datacenter is all about batched LLM performance, as the VRAM pools are bigger than the models. In reality, you can get better parallel token/s on an H100 than on 2x RTX Pros or a few 5090s, especially with bigger models that take advantage of NVLink.
  • What model size/family? What GPU? What context length? There are many different backends with different strengths, but I can tell you the optimal way to run it and the quantization you should run with a bit more specificity, heh.

  • Kobold.cpp is fantastic. Sometimes there are more optimal ways to squeeze models into VRAM (depends on the model/hardware), but TBH I have no complaints.

    I would recommend croco.cpp, a drop-in fork: https://github.com/Nexesenex/croco.cpp

    It has support for the more advanced quantization schemes of ik_llama.cpp. Specifically, you can get really fast performance offloading MoEs, and you can also use much higher quality quantizations, with even ~3.2bpw being relatively low loss. You'd have to make the quants yourself, but it's quite doable... just poorly documented, heh.

    The other warning I'd have is that some of its default sampling presets are funky, if only because they're from the old days of Pygmalion 6B and Llama 1/2. Newer models like much, much lower temperature and rep penalty.
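
    As a concrete starting point, this is the kind of "modern" sampling I'd send to kobold.cpp/croco.cpp's KoboldAI-style endpoint (a sketch; the values are just a reasonable default, not gospel):

    ```python
    # Sketch: low-temperature, near-zero rep penalty settings for newer models.
    import requests

    payload = {
        "prompt": "Summarize the plot of Hamlet.",
        "max_length": 512,
        "temperature": 0.6,   # much lower than old Pygmalion/Llama-1-era presets
        "top_p": 0.95,
        "min_p": 0.05,
        "rep_pen": 1.0,       # effectively off
    }
    r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=600)
    print(r.json()["results"][0]["text"])
    ```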

  • It's kinda a hundred little things all pointing in a bad direction:

    https://old.reddit.com/r/LocalLLaMA/comments/1kg20mu/so_why_are_we_shing_on_ollama_again/

    https://old.reddit.com/r/LocalLLaMA/comments/1ko1iob/ollama_violating_llamacpp_license_for_over_a_year/

    https://old.reddit.com/r/LocalLLaMA/comments/1i8ifxd/ollama_is_confusing_people_by_pretending_that_the/

    I would summarize it as "AI Bro" like behavior:

    • Signs in the code that they are preparing a commercial version of Ollama, likely dumping the free version as a bait and switch.
    • Heavy online marketing.
    • "Reinventing"the wheel" to shut out competition, even when base llama.cpp already has it implemented, like with modelfiles and the ollama API.
    • A lot of inexplicable forked behavior.

    Beyond that:

    • Misnaming models for hype reasons, like the tiny deepseek distils as "Deepseek"
    • Technical screw ups with the backend, chat templates and such hidden from users, so there's no apparent reason why models are misbehaving.
    • Not actually contributing to the core development of the engine.
    • Social media scummery.
    • Treating the user as 'dumb' by hiding things like the default hard 2048-token context window.
    • Not keeping up with technical innovations, like newer quantizations, SWA, batching, other backend stuff.
    • Bad default quantizations, even beyond the above. For instance, no Google QATs (last I checked), no imatrix, no dynamic quants.

    I could go on forever about more specific dramas, and I don't even remember the half of them. But there are plenty of technical and moral reasons to stay away.

    LM Studio is much better put together if you want 1-click. Truly open solutions that are more DIY (and reward you with dramatically better performance from the understanding/learning) are the way if you have the time/patience to burn.

  • I hate to drone on about this again, but:

    • ollama is getting less and less open, and (IMO) should not be used. If that doesn't concern you, you should be using LM Studio anyway.
    • The model sizes they mention are mostly for old models no-one should be using. The only exception is a 70B MoE (Hunyuan), but I think ollama doesn't even support that?
    • The quantization methods they mention are (comparatively) primitive and low performance, not cutting edge.
    • It mentions q8_0 twice, nonsensically... Um, it makes me think this article is AI slop?

    I'm glad opensuse is promoting local LLM usage, but please... not ollama, and be more specific.

    And don't use ollama to write it without checking :/

  • making the most with what you have

    That was, indeed, the motto of ML research for a long time. Just hacking out more efficient approaches.

    It's people like Altman that introduced the idea of not innovating and just scaling up what you already have. Hence many in the research community know he's full of it.

  • Oh and to answer this, specifically, Nvidia has been used in ML research forever. It goes back to 2008 and stuff like the desktop GTX 280/CUDA 1.0. Maybe earlier.

    Most "AI accelerators" are basically the same thing these days: overgrown desktop GPUs. They have pixel shaders, ROPs, video encoders and everything, with the one partial exception being the AMD MI300X and beyond (which are missing ROPs).

    CPUs were used, too. In fact, Intel made specific server SKUs for giant AI users like Facebook. See: https://www.servethehome.com/facebook-introduces-next-gen-cooper-lake-intel-xeon-platforms/

  • Machine learning has been a field for years, as others said, yeah, but Wikipedia would be a better expansion of the topic. In a nutshell, it's largely about predicting outputs based on trained input examples.

    It doesn't have to be text. For example, astronomers use it to find certain kinds of objects in raw data feeds. Object recognition (identifying things in pictures with little bounding boxes) is an old art at this point. Series prediction models are a thing, and LanguageTool uses a tiny model to detect commonly confused words for grammar checking. And yes, image hashing is another, though not entirely machine learning based. IDK what Tineye does in their backend, but there are some more "oldschool" approaches using more traditional programming techniques, generating signatures for images that can be easily compared in a huge database.

    You've probably run ML models in photo editors, your TV, your phone (voice recognition), desktop video players or something else without even knowing it. They're tools.

    Separately, image similarity metrics (like LPIPS or SSIM) that measure the difference between two images as a number (where, say, 1 would be a perfect match and 0 totally unrelated) are common components in machine learning pipelines. These are not usually machine learning based, barring a few exceptions like VMAF (which Netflix developed for video).
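
    A tiny example of one such metric, using SSIM from scikit-image (just to show the "similarity as a number" idea):

    ```python
    # SSIM returns ~1.0 for identical images and drops as they diverge.
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    noisy = np.clip(img + rng.normal(scale=0.1, size=img.shape), 0, 1)

    print(f"identical:  {ssim(img, img, data_range=1.0):.3f}")    # ~1.000
    print(f"noisy copy: {ssim(img, noisy, data_range=1.0):.3f}")  # lower
    ```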

    Text embedding models do the same with text. They are ML models.

    LLMs (aka models designed to predict the next 'word' in a block of text, one at a time, as we know them) in particular have an interesting history, going back to (if I even remember the name correctly) BERT in Google's labs. There were also tiny LLMs people ran on personal GPUs before ChatGPT was ever a thing, like the infamous Pygmalion 6B roleplaying bot, a finetune of GPT-J 6B. They were primitive and dumb, but it felt like witchcraft back then (before AI Bro marketers poisoned the well).

  • politics @lemmy.world

    Trump floats regime change in Iran

    World News @lemmy.world

    Israel bombs Iranian state TV during live broadcast

    United States | News & Politics @lemmy.ml

    Scoop: Four reasons Musk attacked Trump's "big beautiful bill"

    World News @lemmy.world

    Israel plans to occupy and flatten all of Gaza if no deal by Trump's trip

    LocalLLaMA @sh.itjust.works

    Qwen3 "Leaked"

    LocalLLaMA @sh.itjust.works

    Niche Model of the Day: Nemotron 49B 3bpw exl3

    LocalLLaMA @sh.itjust.works

    Niche Model of the Day: Openbuddy 25.2q, QwQ 32B with Quantization Aware Training

    Ask Lemmy @lemmy.world

    How do y'all post clips/animations on Lemmy? Only GIF seems to work.

    politics @lemmy.world

    Trump 2.0 initial approval ratings higher than in first term

    politics @lemmy.world

    Behind the Curtain: Meta's make-up-with-MAGA map

    Enough Musk Spam @lemmy.world

    Elon Musk's headline dominance squeezes other CEOs

    politics @lemmy.world

    Trump sides with Musk in H-1B fight

    politics @lemmy.world

    Elon Musk pledges "war" over H-1B visa program, calls opponents racists

    politics @lemmy.world

    Musk calls MAGA element "contemptible fools" as virtual civil war brews

    Technology @lemmy.world

    Shipping Listing Suggests 24GB+ Intel Arc B580

    Selfhosted @lemmy.world

    Guide to Self Hosting LLMs Faster/Better than Ollama

    LocalLLaMA @sh.itjust.works

    Qwen2.5: A Party of Foundation Models!

    Ask Lemmy @lemmy.world

    How does Lemmy feel about "open source" machine learning, akin to the Fediverse vs Social Media?

    World News @lemmy.world

    Pressure grows as "last chance" negotiations for Gaza deal resume

    News @lemmy.world

    Hostage-ceasefire deal talks stall over new Netanyahu demands, Israeli officials say