
Posts
89
Comments
149
Joined
2 yr. ago

  • My biggest problem with vaping is that there's basically no distinction made between the e-cigarettes this article addresses and vaping dry herbs. I'd love to read up on it and any possible health concerns, but I rarely see it discussed.

  • Good question. At the time I made it there wasn't a good option, and the one in the main repo is very comprehensive and overwhelming. I wanted to make one that was straightforward and easier to digest, so you can see what's actually happening.

  • The significance is that we have a new file format standard. The bad news is that it breaks compatibility with the old format, so you'll have to update to use newer quants and you can't use your old ones.

    The good news is that this is the last time that'll happen (it's happened a few times so far), as this one is meant to be a lot more extensible and flexible, storing a ton of extra metadata for compatibility.

    The great news is that this paves the way for better model support, as we've already seen with support for Falcon being merged: https://github.com/ggerganov/llama.cpp/commit/cf658adc832badaaa2ca119fe86070e5a830f8f6

  • Thanks for the comment! Yes, this is meant more for personal projects than for use in existing projects.

    On the idea of needing a password to get a password: totally understood. My main goal was to have local encrypted storage. The nice thing about this implementation is that you can keep all your env files saved and shared in your git repo so all devs have access, but they can only decrypt them if given the master password, which is shared elsewhere (Keeper, Vault, etc.). That way you don't have to load all values from a vault, just the master password.

    100% agreed, though, that this doesn't cover a large range of use cases, hence the name "simple", haha. I wouldn't be opposed to expanding it, but I think it covers my proposed use cases as-is.
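A toy sketch of the master-password flow described above (function names are mine, not from the actual project, and for real use you'd reach for a vetted library such as `cryptography`'s Fernet rather than this hand-rolled keystream):

```python
import hashlib
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key from the shared master password via scrypt.
    return hashlib.scrypt(master_password.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26, dklen=32)

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256(key || counter) keystream XORed with data.
    # Symmetric, so the same function both encrypts and decrypts.
    out = bytearray()
    for block_idx in range(0, len(data), 32):
        block = data[block_idx:block_idx + 32]
        stream = hashlib.sha256(key + block_idx.to_bytes(8, "big")).digest()
        out.extend(b ^ s for b, s in zip(block, stream))
    return bytes(out)

def encrypt_env(plaintext: str, master_password: str) -> bytes:
    # The random salt is stored alongside the ciphertext, so the whole
    # blob can safely be committed to the shared git repo.
    salt = os.urandom(16)
    return salt + keystream_xor(derive_key(master_password, salt),
                                plaintext.encode())

def decrypt_env(blob: bytes, master_password: str) -> str:
    salt, ciphertext = blob[:16], blob[16:]
    return keystream_xor(derive_key(master_password, salt), ciphertext).decode()

blob = encrypt_env("DB_URL=postgres://localhost/dev\nAPI_KEY=abc123", "master-secret")
print(decrypt_env(blob, "master-secret"))
```

Any dev with the repo plus the out-of-band master password can recover the env values; nothing else needs to be fetched from a vault.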

  • It is interesting how you interpreted the question, though. I think the principle of "rate limiting" plays in my favour here: typically when you rate limit something you don't throw the request into a queue, you deny it and wait for the next one (think APIs).
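The deny-don't-queue behaviour described above is essentially a token bucket; a minimal sketch (class and method names are my own, not from any particular library):

```python
import time

class TokenBucket:
    """Rate limiter that denies excess requests instead of queueing them."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # denied: the caller retries later, nothing is queued

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # burst of 2 passes, the third immediate request is denied
```

The key property is that a denied request is simply dropped; the client is expected to come back, exactly as with API rate limits.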

  • Your best bet is likely going to be editing the original prompt to add information until you get the right output. However, you can also get clever and add to the response of the model itself. Remember, all it's doing is filling in the most likely next word, so you could just add extra text at the end that says "now, to implement it in X way" or "I noticed I made a mistake in Y, to fix that" and then hit generate and let it continue the sentence.
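That continuation trick is just prompt assembly; a sketch with a hypothetical `generate` stub standing in for a real completion call (e.g. whatever API your local model exposes):

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real completion call; an actual model
    # would continue from wherever the prompt text leaves off.
    return " like this."  # placeholder continuation

original_prompt = "Write a function that parses a config file."
first_response = generate(original_prompt)

# Instead of re-prompting from scratch, append a steering phrase to the
# model's own response and let it keep completing from there.
steering = "\n\nI noticed I made a mistake in the error handling, to fix that"
continued = generate(original_prompt + first_response + steering)

full_response = first_response + steering + continued
```

Because the model only ever sees one flat string, it cannot tell which parts it wrote and which parts you edited in, so it obligingly completes your steering sentence.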

  • LocalLLaMA @sh.itjust.works

    llama2.c: Inference Llama 2 in one file of pure C by Andrej Karpathy

    LocalLLaMA @sh.itjust.works

    Leaked GPT-4 Architecture: Demystifying Its Impact & The 'Mixture of Experts' Explained (with code)

    LocalLLaMA @sh.itjust.works

    Dolphin (based on Llama 1) released by Eric Hartford!

    Selfhosted @lemmy.world

    For people self hosting LLMs.. I have a couple docker images I maintain

    LocalLLaMA @sh.itjust.works

    chargoddard's frankensteined 22B llama2

    LocalLLaMA @sh.itjust.works

    My attempt at explaining group size and act order simply (but definitely not briefly)

    LocalLLaMA @sh.itjust.works

    My attempt to explain groupsize and act order in GPTQ

    LocalLLaMA @sh.itjust.works

    Llama-2, Mo’ Lora (proof of concept MOE of LoRAs)

    Android @lemdro.id

    Samsung teases latest foldables ahead of Unpacked | TechCrunch

    LocalLLaMA @sh.itjust.works

    Llama 2 - Meta AI

    LocalLLaMA @sh.itjust.works

    Retentive Network: A Successor to Transformer for Large Language Models

    LocalLLaMA @sh.itjust.works

    Finally got my shit together and made git repos of my docker images

    LocalLLaMA @sh.itjust.works

    llamacpp has added custom RoPE (#2054) · ggerganov/llama.cpp@6e7cca4

    LocalLLaMA @sh.itjust.works

    Open-Orca/OpenOrca-Preview1-13B · Hugging Face

    cats @lemmy.world

    Molly sits wherever she pleases

    LocalLLaMA @sh.itjust.works

    vLLM: Easy, Fast, and Cheap LLM Serving with PagedAttention

    LocalLLaMA @sh.itjust.works

    OpenOrca, an open-source dataset and series of instruct-tuned language models

    LocalLLaMA @sh.itjust.works

    Koboldcpp 1.33 released and dockerized

    Android @lemmy.world

    Google Pixel 8 leak points to desktop mode support

    Android @lemmy.world

    Qualcomm Announces Multi-Year Collaboration with Sony to Deliver Next Generation Smartphones