I really, REALLY like this idea of coming together with some cool people to actually make some progress for once, and not just to raise the bottom line this quarter.
When we get old, we don't just gain the ability to stop working; we also lose the ability to do many things. Imagine what we would do if we actually had that time RIGHT NOW!
More cool small projects would pop up everywhere, since those ideas don't need to be profitable anymore.
Honestly, give Image Toolbox a good try. It has a lot of good features, and I wish I liked it more than I do. You might like its workflow a lot more than I do.
It's really bad for image editing... But I haven't found many open image editors that are even okay. This one has really bad text handling and subpar painting tools.
Generally: don't use it, but also what else are you gonna use? "Image Toolbox"? Fair choice, but I don't like it.
This prefix feature is already in Open Web UI! There is the "Playground", which lets you define any kind of conversation and also lets the model continue a message you started writing for it. The playground is really useful.
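For anyone curious what that "continue a message I started" trick looks like underneath, here is a minimal sketch, assuming a llama.cpp-style server listening on localhost:8080 (the endpoint name and fields are that server's, not Open Web UI's): the prompt simply ends partway through the assistant's turn and the model carries on from there.

```python
# Minimal sketch of continuing a half-written assistant message via a raw completion endpoint.
# Assumes a llama.cpp-style server on localhost:8080; adjust the URL/fields to your backend.
import requests

prompt = (
    "User: Give me a one-line summary of speculative decoding.\n"
    "Assistant: In short, speculative decoding is"   # the prefix we wrote ourselves
)
resp = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 64},
)
# The model continues our half-finished assistant message instead of starting a new one.
print(prompt + resp.json()["content"])
```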
What exactly do you mean by "draft models"? I have never heard of that speculative decoding thing...
It indeed is one of these cases. Individual letter recognition doesn't work well, since the letters tend to get clumped together into larger "tokens", which can't be broken apart anymore. That's also why it can't count the letters in words: it sees a word as just a single "token", or at most three.
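You can see the clumping for yourself with any tokenizer library; here is a minimal sketch using the tiktoken package (my choice, purely for illustration):

```python
# Minimal sketch: a word becomes a few multi-character tokens, not one entry per letter.
# Assumes the `tiktoken` package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print(tokens)                                 # a handful of token ids
print([enc.decode([t]) for t in tokens])      # multi-letter chunks, no letter-level view
```

Since the model only ever sees those chunk ids, a question like "how many r's are in strawberry" has no letter-level signal to work from.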
Could you please tell me why you chose kobold.cpp over llama.cpp? I only ever used llama.cpp so I'd like to hear from the other side!
I really like the idea of letting an LLM perform tool calls in the middle of a generation.
Like, we instruct the LLM to say what it will do, then to put the tool call inside <tool> ... </tool> tags. Then we could set </tool> as a stop keyword and insert the results into its message (roughly as in the sketch below).
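A rough sketch of that loop, purely as an illustration: generate(), the server URL, and run_tool() are stand-ins I made up, not anything from a specific framework. The idea is just to stop on </tool>, execute whatever the model wrote between the tags, splice the result back into its own message, and keep generating.

```python
# Rough sketch of mid-generation tool calls via a stop keyword.
# generate() assumes a llama.cpp-style /completion endpoint; run_tool() is a made-up stand-in.
import requests

def generate(prompt, stop):
    resp = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": prompt, "n_predict": 512, "stop": stop},
    )
    return resp.json()["content"]  # text up to (not including) the stop keyword

def run_tool(call_text):
    # Hypothetical: parse and execute the tool call, return its output as text.
    return f"(result of {call_text.strip()})"

prompt = "User: What is 2 to the power of 32?\nAssistant:"
for _ in range(5):  # cap the number of tool rounds
    chunk = generate(prompt, stop=["</tool>"])
    prompt += chunk
    if "<tool>" in chunk:
        # Generation halted at </tool>: run the call, then splice the result
        # into the assistant's own message and let it keep going.
        call = chunk.split("<tool>", 1)[1]
        prompt += "</tool>\n<result>\n" + run_tool(call) + "\n</result>\n"
    else:
        break

print(prompt)
```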
I have tried this before, but the model tends not to believe what is in its own message. It sees the output of the tool call and goes "Don't believe what I just said, I made that up", even though LLMs are infamous for hallucinating...
How did this post get three likes before I even added the image? Did people follow the link and think "Aw yeah, thanks for sharing this link, have a like"?
He's illustrated so spot on. Very impressive.