
  • You're welcome! It's good to have a local DRM-free backup of your favorite channels, especially the ones you come back to as a sleep aid or for comfort when stressed. I would download them sooner rather than later while yt-dlp still works (Google's been ramping up its war on ad blockers and third-party frontends). BTW, yt-dlp also works with Bandcamp and SoundCloud to extract audio :)
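
    If it helps, here's a rough sketch of the audio-extraction side through yt-dlp's Python API (the plain yt-dlp command line does the same job); the URL and output template are just placeholders:

        from yt_dlp import YoutubeDL

        # grab best audio and convert to mp3 with ffmpeg (yt-dlp needs ffmpeg installed for this)
        opts = {
            "format": "bestaudio/best",
            "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "mp3"}],
            "outtmpl": "%(uploader)s - %(title)s.%(ext)s",  # output filename template
        }

        with YoutubeDL(opts) as ydl:
            ydl.download(["https://example.bandcamp.com/album/some-album"])  # placeholder Bandcamp/SoundCloud/YouTube URL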

  • It depends on how nerdy you want to get.

    Generally no. That checkmark is Cloudflare's multi-million-dollar product that protects half of the internet from bot abuse. If there were a quick and easy workaround, the bots would quickly learn and adopt it too. Sadly, it also has a real penchant for triggering on regular users when you're using hardened browsers and VPNs. It would be nice if network admins cared a little more about sticking it to the man and using bot-protection alternatives instead of just proxying through a Cloudflare tunnel and calling it a day.

    If you're slightly tech savvy and lucky, the site allows web crawlers/indexers or has RSS feeds. That lets you use an RSS reader or a text scraper like NewsWaffle.

    What you're describing with saving a cookie to prove your identity is actually doable. yt-dlp uses this to download age-restricted content, but getting that cookie extracted into a text file is a non-trivial, nerdy thing the average person can't be expected to do.
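
    For the curious, a minimal sketch of that cookie route through yt-dlp's Python API; the cookies-from-browser option can even skip the manual text-file export entirely (browser and URL here are just placeholders):

        from yt_dlp import YoutubeDL

        opts = {
            # read cookies straight from a local Firefox profile
            "cookiesfrombrowser": ("firefox",),
            # or, if you already exported one: "cookiefile": "cookies.txt",
        }

        with YoutubeDL(opts) as ydl:
            ydl.download(["https://www.youtube.com/watch?v=VIDEO_ID"])  # placeholder age-restricted video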

  • I don't have a lot of knowledge on the topic, but I'm happy to point you in a good direction for reference material. I first heard about tensor layer offloading here a few months ago. That post links to another on MoE expert layer offloading, which it was based off of. I highly recommend you read through both posts.

    The gist of the tensor offloading strategy: instead of offloading entire layers with --gpulayers, you use --overridetensors to keep specific large tensors (particularly FFN tensors) on the CPU while moving everything else to the GPU.

    This works because:

    • Attention tensors: Small, benefit greatly from GPU parallelization
    • FFN tensors: Large, can be efficiently processed on CPU with basic matrix multiplication

    You need to figure out exactly which tensors to keep on the CPU for your model by looking at the weights and cooking up a regex according to the post.

    Here's an example of the kobold startup flags for doing this. The key part is the --overridetensors flag and the regex contained in it:

        python ~/koboldcpp/koboldcpp.py --threads 10 --usecublas --contextsize 40960 --flashattention --port 5000 --model ~/Downloads/MODELNAME.gguf --gpulayers 65 --quantkv 1 --overridetensors "\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up=CPU"
        ...
        [18:44:54] CtxLimit:39294/40960, Amt:597/2048, Init:0.24s, Process:68.69s (563.34T/s), Generate:56.27s (10.61T/s), Total:124.96s

    The exact specifics of how you determine which tensors to target for each model, and the associated regex, are a little beyond my knowledge, but the people who wrote the tensor post did a good job explaining that process in detail. Hope this helps.
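
    If you want to poke around yourself, here's a little sketch of mine (not from the linked posts) using the gguf Python package that ships alongside llama.cpp, assuming I'm remembering its API right, to list tensor names and see which ones a given regex would pin to CPU; the model path is a placeholder:

        import re
        from gguf import GGUFReader

        reader = GGUFReader("MODELNAME.gguf")  # placeholder path to your gguf file
        # matching half of the regex from the example flags above (minus the "=CPU" target)
        pattern = re.compile(r"\.[13579]\.ffn_up|\.[1-3][13579]\.ffn_up")

        for t in reader.tensors:
            placement = "CPU" if pattern.search(t.name) else "GPU"
            print(placement, t.name, list(t.shape))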

  • I would recommend you get a cheap wattage meter that plugs in between the wall outlet and the PSU powering your cards for $10-15 (the $30 name-brand Kill A Watts are overpriced and unneeded IMO). You can get rough approximations by doing some math with your cards' listed TDP specs added together, but that doesn't account for the motherboard, CPU, RAM, drives, and so on, or the real change between idle and load. With a meter you can just watch the total power draw with all that factored in, note the increase and the max as your rig inferences a bit, and have the comfort of being reasonably confident in the actual numbers. Then you can plug the values into a calculation.
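
    As an example of the kind of calculation I mean, with made-up numbers you'd swap for your own meter readings and electricity rate:

        # back-of-envelope cost sketch; every value here is a placeholder
        idle_watts = 80        # whole-rig draw sitting idle (from the meter)
        load_watts = 320       # whole-rig draw while inferencing (from the meter)
        price_per_kwh = 0.15   # your electricity rate in $/kWh
        tokens_per_sec = 10    # generation speed your engine reports

        extra_kw = (load_watts - idle_watts) / 1000
        cost_per_hour = extra_kw * price_per_kwh
        tokens_per_hour = tokens_per_sec * 3600
        print(f"~${cost_per_hour / tokens_per_hour * 1_000_000:.2f} per million tokens")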

  • I have not tried any models larger than a very low quant Qwen 32B. My personal limit for partial offloading speed is 1 TPS, and the 32B models encroach on that. Once I get my VRAM upgraded from 8GB to 16-24GB, I'll test the waters with higher parameter counts and hit some new limits to benchmark :) I haven't tried MoE models either; I keep hearing about them. AFAIK they're popular because you can do advanced partial offloading strategies between the different experts to really bump up token generation, so playing around with them has been on my ML bucket list for a while.

  • Oh, I LOVE to talk, so I hope you don't mind if I respond with my own wall of text :) It got really long, so I broke it up with headers.

    TLDR: Bifurcation is needed because of how fitting multiple GPUs on one PCIe x16 lane works and consumer CPU PCIe lane management limits. Context offloading is still partial offloading, so you'll still get hit with the same speed penalty—with the exception of one specific advanced partial offloading inference strategy involving MoE models.

    CUDA

    To be clear about CUDA, it's an API optimized for software to use NVIDIA cards. When you use an NVIDIA card with Kobold or another engine, you tell it to use CUDA as an API to optimally use the GPU for compute tasks. In Kobold's case, you tell it to use cuBLAS for CUDA.

    The PCIe bifurcation stuff is a separate issue when trying to run multiple GPUs on limited hardware. However, CUDA has an important place in multi-GPU setups. Using CUDA with multiple NVIDIA GPUs is the gold standard for homelabs because it's the most supported for advanced PyTorch fine-tuning, post-training, and cutting-edge academic work.

    But it's not the only way to do things, especially if you just want inference on Kobold. Vulkan is a universal API that works on both NVIDIA and AMD cards, so you can actually combine them (like a 3060 and an AMD RX) to pool their VRAM. The trade-off is some speed compared to a full NVIDIA setup on CUDA/cuBLAS.

    PCIe Bifurcation

    Bifurcation is necessary in my case mainly because of physical PCIe port limits on the board and consumer CPU lane handling limits. Most consumer desktops only have one x16 PCIe slot on the motherboard, which typically means only one GPU-type device can fit nicely. Most CPUs only have 24 PCIe lanes, which is just enough to manage one x16 slot GPU, a network card, and some M.2 storage.

    There are motherboards with multiple physical x16 PCIe slots and multiple CPU sockets for special server-class CPUs like Threadrippers with huge PCIe lane counts. These can handle all those PCIe devices directly at max speeds, but they're purpose-built server-class components that cost $1,000+ USD just for the motherboard. When you see people on homelab forums running dozens of used server-class GPUs, rest assured they have an expensive motherboard with 8+ PCIe x16 slots, two Threadripper CPUs, and lots of bifurcation. (See the bottom for parts examples.)

    Information on this stuff and which motherboards support it is spotty—it's incredibly niche hobbyist territory with just a couple of forum posts to reference. To sanity check, really dig into the exact board manufacturer's spec PDF and look for mentions of PCIe features to be sure bifurcation is supported. Don't just trust internet searches. My motherboard is an MSI B450M Bazooka (I'll try to remember to get exact numbers later). It happened to have 4x4x4x4 compatibility—I didn't know any of this going in and got so lucky!

    For multiple GPUs (or other PCIe devices!) to work together on a modest consumer desktop motherboard + CPU sharing a single PCIe x16, you have to:

    1. Get a motherboard that allows you to intelligently split one x16 PCIe lane address into several smaller-sized addresses in the BIOS
    2. Get a bifurcation expansion card meant for the specific splitting (4x4x4x4, 8x8, 8x4x4)
    3. Connect it all together cable-wise and figure out mounting/case modification (or live with server parts thrown together on a homelab table)

    A secondary reason I'm bifurcating: the used server-class GPU I got for inferencing (Tesla P100 16GB) has no display output, and my old Ryzen CPU has no integrated graphics either. So my desktop refuses to boot with just the server card—I need at least one display-output GPU too. You won't have this problem with the 3060. In my case, I was planning a multi-GPU setup eventually anyway, so going the extra mile to figure this out was an acceptable learning premium.

    Bifurcation cuts into bandwidth, but it's actually not that bad. Going from x16 to x4 only results in about 15% speed decrease, which isn't bad IMO. Did you say you're using a x1 riser though? That splits it to a sixteenth of the bandwidth—maybe I'm misunderstanding what you mean by x1.

    I wouldn't obsess over multi-GPU setups too hard. You don't need to shoot for a data center at home right away, especially when you're still getting a feel for this stuff. It's a lot of planning, money, and time to get a custom homelab figured out right. Just going from Steam Deck inferencing to a single proper GPU will be night and day. I started with my decade-old ThinkPad inferencing Llama 3.1 8B at about 1 TPS, and it inspired me enough to dig out the old gaming PC sitting in the basement and squeeze every last megabyte of VRAM out of it. My 8GB 1070 Ti held me for over a year until I started doing enough professional-ish work to justify a proper multi-GPU upgrade.

    Offloading Context

    Offloading context is still partial offloading, so you'll hit the same speed issues. You want to use a model that leaves enough memory for context completely within your GPU VRAM. Let's say you use a quantized 8B model that's around 8GB on your 12GB card—that leaves 4GB for context, which I'd say is easily about 16k tokens. That's what most lower-parameter local models can realistically handle anyway. You could partially offload into RAM, but it's a bad idea—cutting speed to a tenth just to add context capability you don't need. If you're doing really long conversations, handling huge chunks of text, or want to use a higher-parameter model and don't care about speed, it's understandable. But once you get a taste of 15-30 TPS, going back to 1-3 TPS is... difficult.
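
    For a rough sense of why 4GB is plenty for ~16k tokens, here's the back-of-envelope KV cache math as I understand it, assuming a Llama-3-8B-style config (32 layers, 8 KV heads with GQA, head dim 128, fp16 cache); real engines add compute buffers on top of this, so treat it as a floor:

        # rough KV-cache sizing sketch with assumed Llama-3-8B-like numbers
        n_layers = 32
        n_kv_heads = 8
        head_dim = 128
        bytes_per_element = 2  # fp16 K/V cache; quantized KV (e.g. --quantkv) shrinks this

        bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_element  # x2 for K and V
        ctx_tokens = 16_384
        print(f"{bytes_per_token / 1024:.0f} KiB per token")                              # ~128 KiB
        print(f"{bytes_per_token * ctx_tokens / 2**30:.2f} GiB for {ctx_tokens} tokens")  # ~2 GiB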

    MoE

    Note that if you're dead set on partial offloading, there's a popular way to squeeze performance through Mixture of Experts (MoE) models. It's all a little advanced and nerdy for my taste, but the gist is that you can use clever partial offloading strategies with your inferencing engine. You split up the different expert layers that make up the model between RAM and VRAM to improve performance—the unused experts live in RAM while the active expert layers live in VRAM. Or something like that.
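
    To make that a bit more concrete, here's a sketch of how I understand people launch it in kobold, reusing the --overridetensors trick from earlier to pin the expert FFN tensors (named like "ffn_*_exps" in llama.cpp's GGUF layout, if I have that right) to system RAM while everything else goes to VRAM. Untested by me, and the model path is a placeholder:

        import subprocess

        cmd = [
            "python", "koboldcpp.py",
            "--model", "Some-MoE-Model.gguf",        # placeholder model file
            "--usecublas",
            "--gpulayers", "99",                     # more layers than the model has, i.e. everything on GPU...
            "--overridetensors", "ffn_.*_exps=CPU",  # ...except the expert FFN tensors, kept in RAM
        ]
        subprocess.run(cmd, check=True)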

    I like to talk (in case you haven't noticed). Feel free to keep the questions coming—I'm happy to help and maybe save you some headaches.

    Oh, in case you want to fantasize about parts shopping for a multi-GPU server-class setup, here are some links I have saved for reference. GPUs used for ML can be fine on 8 PCI lanes (https://www.reddit.com/r/MachineLearning/comments/jp4igh/d_does_x8_lanes_instead_of_x16_lanes_worsen_rtx/)

    A Threadripper Pro has 128 PCI lanes: (https://www.amazon.com/AMD-Ryzen-Threadripper-PRO-3975WX/dp/B08V5H7GPM)

    You can get dual sWRX8 motherboards: (https://www.newegg.com/p/pl?N=100007625+601362102)

    You can get a PCIe 4x expansion card on Amazon: (https://www.amazon.com/JMT-PCIe-Bifurcation-x4x4x4x4-Expansion-20-2mm/dp/B0C9WS3MBG)

    All together, that's 256 PCI lanes per machine, as many PCIe slots as you need. At that point, all you need to figure out is power delivery.

  • Thanks for being that guy, good to know. Those specific numbers were just done tonight with DeepHermes 8B q6km (finetuned from Llama 3.1 8B) with max context at 8192. In the past, before I reinstalled, I managed to squeeze ~10k context out of the 8B by booting without a desktop environment. I happen to know that DeepHermes 22B IQ3 (finetuned from Mistral Small) runs at like 3 TPS partially offloaded with 4-5k context.

    DeepHermes 8B is the fast and efficient general model I use for conversation, basic web search, RAG, data-table formatting/basic markdown generation, and simple computations, with DeepSeek R1 distill reasoning CoT turned on.

    DeepHermes 22B is the local powerhouse model I use for more complex tasks requiring either more domain knowledge or reasoning ability, for example helping break down legacy code and boilerplate simple functions for game creation.

    I have a vision model + TTS pipeline for OCR scanning and narration using Qwen 2.5 VL 7B + OuteTTS + WavTokenizer, which I was considering trying to calculate too, though I'd need to add up both the LLM TPS and the audio TTS TPS.

    I plan to load up a Stable Diffusion model and see how image generation compares, but the calculations will probably be slightly different.

    I hear there are one or two local models floating around that work with Roo-Cline for advanced tool usage. If I can find a local model in the 14B range that works with Roo, even just for basic stuff, it will be incredible.

    Hope that helps inform you; sorry if I missed something.

  • LocalLLaMA @sh.itjust.works

    How to calculate cost-per-tokens output of local model compared to enterprise model API access

  • No worries :) A model fully loaded onto the 3060's 12GB of VRAM will give you a huge boost, around 15-30 TPS depending on the 3060's bandwidth and tensor cores. It's really such a big difference; once you get a properly fitting quantized model you're happy with, you probably won't be thinking of offloading to RAM again if you just want LLM inferencing. Check to make sure your motherboard supports PCIe bifurcation before you make any multi-GPU plans. I got super lucky with my motherboard allowing 4x4x4x4 bifurcation for potentially 4 GPUs, but I could easily have been screwed if it didn't.

  • I just got a second PSU just for powering multiple cards on a single bifurcated PCIe slot for a homelab-type thing. A snag I hit that you might be able to learn from: PSUs need to be turned on by the motherboard before they can power a GPU. You need a $15 electrical relay board that sends power from the motherboard to the second PSU, or it won't work.

    It's gonna be slow as molasses partially offloaded onto regular RAM no matter what; it's not like DDR4 vs DDR3 is that much different speed-wise. It might be a 10-15% increase, if that. If you're partially offloading and not doing some weird optimized MoE type of offloading, expect 1-5 tokens per second (really more like 2-3).

    If you're doing real inferencing work and need speed, then VRAM is king; you want to fit it all within the GPU. How much VRAM does the 3060 you're looking at have?

  • points in a direction your mind can't understand, with the finger seeming to disappear as it bisects into a higher plane

    "over there to the left after the bathrooms"

  • Hi, electrical systems engineer with an off-grid solar system powering fans I've tested with meters, signing in.

    The typical fans you can buy in consumer stores are about 100W on average: a little less on low, around 80W, and a little more on high, like 110-120W.

    They make more energy-efficient fans. In particular, brushless-motor DC-powered fans meant for marine boating power systems are incredibly energy efficient and quiet, but they're also incredibly expensive.

    Also keep in mind that consumer fans kind of suck compared to a true industrial fan, which can take a lot more power for serious wind-speed output, which the Wikipedia page for this device says improves the efficiency of purification. You can get power-tool industrial fans that run off DeWalt-style tool batteries, which are low DC voltage but high amperage; they'll be more powerful than typical consumer fans too, but run out of battery juice within hours.

    I personally like the 10-15 watt DC fans with pass-through USB-C charging for personal cooling, but that's not what we're talking about.

  • The defiance thing itself is induced by being a half-sentient talking ape that doesn't like being told things like "No!" or "That's bad, don't do that!" about its actions. The antivaxxers also like to layer in government mistrust and religious-faith levels of I-must-be-right, but the core of defiance is just that people don't like feeling shitty and have poor instinctual coping methods for it.

    It's a fairly universal cop-out for humans to avoid feelings of shame, embarrassment, or punishment. Nobody on the planet likes feeling stupid or in the wrong. So when confronted with these feelings, some common strategies are outright lying to others or ourselves, constructing a delusional, warped subjective reality in which they're actually somehow in the right through mental gymnastics, rejecting the claim of wrongness through formal argument, or using social dominance plays to forcefully dismiss claims of wrongdoing.

    Usually people land on a strange mixture of several of these strategies in emotionally combative arguments and in coping with their aftermath.

    In an ideal world, people would be able to instantly accept blame and fault for wrongdoing, eat the feelings of shame, and self-improve without trying to weasel their way out of punishment if they think they can get away with it. A properly adjusted adult who has eaten their fair share of humble pie over their life approaches that kind of humility and willingness to update their assumptions given evidence, and is able to say "oh shit, I was wrong about that, huh? My bad." or "the science nerds who study these things probably have more authority on the matter than I do; I'll default to their ideas." This, however, is not an ideal world.

    Children have zero impulse control and are barely sentient enough to construct coherent sentences or internal monologues. Their tiny brains are still developing, so they more or less get a free pass when it comes to being defiant little shits defaulting to instinctual ape cop-out strategies. All kids have their bad moments and are still learning. The problem is when they never stop defaulting to these things in adulthood.

  • — try long-pressing your phone keyboard's hyphen key.

  • From the post title, description, and other people's comments, I took away that the meme is more about suspecting your ex didn't even write their own breakup message, based off the use of em dashes.

    It's a cute surface-level joke, but it touches on a real nerve, because it's becoming more and more common to be falsely accused of being an LLM and told to "ignore all previous instructions and (some stupid instruction)" based on small writing quirks like using em dashes or markdown, and the top comments share this frustration too.

    I shouldn't have to feel self-conscious about the way I write

        Just to pass armchair LLM detector wannabe vibe checks 🖕.
  • I'm a markdown nerd who likes to use headers to break up longer posts and sometimes properly bullet-point things or put ASCII art in preformatted boxes. Anyone who thinks they have the magic sauce for LLM-generation detection because a post goes out of its way to do more than the bare minimum with punctuation or formatting is an asshole.

  • pics @lemmy.world

    A vibrantly rainy sunset

  • What does an MCP server do?

  • LocalLLaMA @sh.itjust.works

    Homelab upgrade WIP

  • The point in time after the first qubit-based supercomputers transitioned from theoretical abstraction to proven physical reality, thus opening up the can of worms of feasibly cracking classical cryptographic encryption like an egg within humanly acceptable time frames instead of longer-than-the-universe's-lifespan time frames. Thanks, superposition-probability-based parallel computation.

  • Thank you for deciding to engage with our community here! You're in good company.

    Kobold just released a bunch of tools for quant making that you may want to check out.

    Kcpp_tools

    I have not made my own quants. I usually just grab whatever imatrix GGUFs Bartowski or the other top quant makers on HF release.

    I too am in the process of upgrading my homelab and opening up my model engine as a semi-public service. The biggest performance gains I've found are using CUDA and loading everything in VRAM. So far I've just been working with my old NVIDIA 1070 Ti 8GB card.

    Haven't tried the vLLM engine, just Kobold. I hear good things about vLLM; it will be something to look into sometime. I'm happy and comfortable with my model engine system as I've got everything set up just the way I want it, but I'm always open to performance optimization.

    If you haven't already, try running vLLM with its CPU niceness set to the highest priority. If vLLM can use flash attention, try that too.

    I'm just enough of a computer nerd to get the gist of technical things and set everything up on the software/networking side. Bought a domain name, set up a web server, and hardened it. Kobold's web UI didn't come with HTTPS SSL/TLS cert handling, so I needed to get a reverse proxy working to get the connection properly encrypted.

    I am really passionate about this even though so much of the technical nitty-gritty under the hood of models goes over my head. I was inspired enough to buy a Tesla P100 16GB and try shoving it into an old gaming desktop, which is my current homelab project. I don't have a lot of money, so this was months of saving for the used server-class GPU and the PSU to run it plus my 1070 Ti 8GB.

    The PC/server-building hardware side scares me, but I'm working on it. I'm not used to swapping parts out at all. When I tried to build my own PC a decade ago, it didn't last long before something blew, so there's a bit of residual trauma there. I'm worried about things not fitting right in the case, or destroying something, or the card not working at all.

    Those are unhealthy worries when I'm trying to apply myself to this cutting-edge stuff. I'm really trying to work past that anxiety and just try my best to install the stupid GPU. I figure if I fail, I fail; that's life, and it will be a learning experience either way.

    I want to document the upgrade journey on my new self-hosted site. I also want to open my Kobold service to public use by fellow hobbyists. I'm not quite confident in sharing my domain on the public web just yet, though; I'm still cooking.

  • Coincidentally, the same name as my geometry-themed experimental grunge rock band.

  • Ask Lemmy @lemmy.world

    What's a better name for 'graphics cards' that describes the kind of computational work it does

    LocalLLaMA @sh.itjust.works

    MistralAI releases Magistral, their first official reasoning models. magistral small 2506 released under apache 2.0 license!

    Selfhosted @lemmy.world

    Got any security advice for setting up a locally hosted website/external service?

    Ask Lemmy @lemmy.world

    Got any security advice for setting up a locally hosted website/external service?

    LocalLLaMA @sh.itjust.works

    Updated guidelines for c/LocaLLama (new rules)

    LocalLLaMA @sh.itjust.works

    DeepSeek just released updated r1 models with 'deeper and more complex reasoning patterns'. Includes a r1 distilled qwen3 8b model boasting "10% improved performance" over original

    Ask Lemmy @lemmy.world

    Advice for picking a PSU for server class GPUs? Also a question about adapter cable

    LocalLLaMA @sh.itjust.works

    Using local model with basic RAG to help reference rules when playing table top game

    Ask Lemmy @lemmy.world

    Will the motherboard in my decade old desktop pc work with any new graphics card?

    LocalLLaMA @sh.itjust.works

    Has your local thinking model had an 'Aha!' moment similar to the one in the DeepSeek R1 papers?

    LocalLLaMA @sh.itjust.works

    Anthropic's 'On the Biology of a LLM' got a massive update: Features fascinating deep dives into how models process information behind the scenes

    LocalLLaMA @sh.itjust.works

    NousResearch is quietly cooking some fascinating stuff presumably for full release of DeepHermes!

    LocalLLaMA @sh.itjust.works

    llama4 release discussion thread

    linuxmemes @lemmy.world

    Better watch out when those windows-fanboy silicon lifeforms start talking shit on my favorite operating system family.

    LocalLLaMA @sh.itjust.works

    Timelapse of our current LocaLLaMA community thumbnail llama creation

    LocalLLaMA @sh.itjust.works

    Latest release of kobold.cpp adds tts voice cloning support via OuteTTS, updates multimodal vision mmproj projectors for Qwen2.5 VL

    linuxmemes @lemmy.world

    Linux Hemp is a new stoner-based fork of Linux Mint