Posts: 0 · Comments: 119 · Joined: 4 mo. ago

  • I think what's really happening behind the scenes is that the model you're talking to makes a function call to another model that generates the image.

    I haven't seen it either, so if you want that and don't want to code, it might be best to stick with a paid service, but something like that could easily exist elsewhere.
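
    If it helps to picture the hand-off, here's a rough sketch of the pattern in Python. The function names and return shapes are made up for illustration; this isn't any real chat UI's actual API:

    ```python
    # Sketch of the "chat model delegates to an image model" idea.
    # Everything here is illustrative, not a real API.

    def chat_model(prompt: str) -> dict:
        """Stand-in chat model: decides whether to hand off to an image model."""
        if "draw" in prompt.lower() or "image" in prompt.lower():
            # The chat model never renders pixels itself; it just emits a tool call.
            return {"tool_call": {"name": "generate_image",
                                  "arguments": {"prompt": prompt}}}
        return {"text": "Plain text reply, no image needed."}

    def image_model(prompt: str) -> str:
        """Stand-in image model: would return a path/URL to the generated picture."""
        return f"/tmp/generated/{abs(hash(prompt))}.png"

    def handle(prompt: str) -> str:
        reply = chat_model(prompt)
        call = reply.get("tool_call")
        if call and call["name"] == "generate_image":
            # The front end routes the call to the image model and shows the result.
            return image_model(call["arguments"]["prompt"])
        return reply["text"]

    print(handle("Please draw a cat wearing a hat"))
    ```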

  • FWIW, speech-to-text works really well on Apple stuff.

    I'm not exactly sure what info you're looking for, but: my gaming PC is headless and sits in a closet. I run Ollama on it and connect using a client called "ChatBox". It has an RTX 3060, which fits the whole model, so it's reasonably fast. I've tried the 32B model and it does work, but slowly.

    Honestly, Ollama was so easy to set up that if you have any experience with computers, I recommend giving it a shot. (Could be a great excuse to get a new GPU 😉)
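
    If you'd rather poke at it from code instead of ChatBox, here's a minimal sketch of hitting Ollama's HTTP API from another machine on the LAN. The hostname and model tag are assumptions; swap in your own:

    ```python
    # Minimal sketch of querying a headless Ollama box from another machine.
    import requests

    OLLAMA_URL = "http://gaming-pc.local:11434"  # assumption: your server's address on the LAN

    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={
            "model": "llama3.1",  # placeholder; any model you've pulled with `ollama pull`
            "prompt": "Explain what a headless server is in one sentence.",
            "stream": False,      # return one JSON blob instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])
    ```

    (One gotcha: by default Ollama only listens on localhost, so on the server you'd set OLLAMA_HOST=0.0.0.0 before starting it to make it reachable over the network.)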

  • It's a trade-off for sure. I think the area where editors like Vim totally win is when you need to SSH into a server and edit something. I think they'll always exist because of that use case.

  • You want war? Read a book on guerrilla warfare. It’s about hearts and minds. You’re shooting yourself in the foot by arguing with people who largely agree with you.

    Why can't you join an open and honest discussion about what to do? Is it weakness? 🤔