Currently reading news and communicating with people around the world from the privacy of my toilet using my hand terminal. It can also understand what I'm saying and execute my spoken commands (to some extent, at least). That's some sci-fi shit right there. Pun intended.
Oh yeah, mokkabønner are awesome! And I don't really enjoy dark chocolate that much in general. I get really sad every time I've gotten Kremtopper by accident. Why are those boxes so damn similar anyway? And they're often stocked on the same shelf as well..
I don't agree. I'm Norwegian though, so there's a bit of a difference in how our chocolate is made with regard to additives. Our milk chocolate is the shit! Additions to it are also great, but they come in the form of actual pieces of dried fruit, salted corn chips, tiny pieces of fudge, brownies, etc.
I will be using Sonarr and maybe others as well. I have set it up now. I liked Prowlarr a lot. I have added a language-specific torrent site to it, and it seems like Prowlarr is the only place where I can select exactly which language to download.
I haven't set up Jellyseerr yet, so I haven't seen it in action, but if it works the way I understand it should, then when media finishes downloading it is automatically added to your Jellyfin server.
See the reply by @zewm@lemmy.zip. Jellyseerr is the GUI where you search for the media. It then sends the request to Radarr/Sonarr/..., which works with Prowlarr to grab the usenet or torrent file and makes your usenet/torrent client download it. When the download is complete, the file is moved to the correct place, renamed according to your rules, and inserted into your media server.
Radarr is for movies, Sonarr is for shows, and there are others for audiobooks, music, comics, etc.
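Just to illustrate the last step of that chain: here's a minimal sketch (not Sonarr's actual code) of what "renamed by your rules and moved to the correct place" amounts to once the download client finishes. The naming pattern and function name are made up for the example.

```python
import re
import shutil
from pathlib import Path

def import_episode(downloaded: Path, library: Path) -> Path:
    """Parse a 'Show.Name.S01E02.1080p.mkv'-style release name and file it
    as 'Show Name/Season 01/Show Name - S01E02.mkv' (a common naming rule),
    where the media server picks it up on its next library scan."""
    m = re.match(r"(?P<show>.+?)\.S(?P<s>\d{2})E(?P<e>\d{2})\..*\.(?P<ext>\w+)$",
                 downloaded.name)
    if not m:
        raise ValueError(f"unrecognized release name: {downloaded.name}")
    show = m["show"].replace(".", " ")
    dest = (library / show / f"Season {m['s']}"
            / f"{show} - S{m['s']}E{m['e']}.{m['ext']}")
    dest.parent.mkdir(parents=True, exist_ok=True)
    # Hand-off: move the finished file out of the download folder
    # and into the media server's library.
    shutil.move(downloaded, dest)
    return dest
```

The real apps do a lot more (quality upgrades, hardlinks instead of moves, metadata), but the rename-and-move is the part Jellyfin actually sees.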
I'm gonna give llamafile a go! I want to run it at least once with a different set of weights, just to see it work and to see how different weights handle the same inputs.
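Trying the same prompt against two different weight files could look something like this. The flags follow llama.cpp's conventions, which llamafile inherits (`-m` selects the weights, `-p` is the prompt, `-n` caps the generated tokens); the binary path and `.gguf` file names here are hypothetical.

```shell
# Hypothetical paths: assumes a downloaded llamafile binary and two
# GGUF weight files sitting in the current directory.
BIN=./llamafile
PROMPT="Explain what a llamafile is in one sentence."
for WEIGHTS in mistral-7b-instruct.Q4_K_M.gguf llama-3-8b-instruct.Q4_K_M.gguf; do
  echo "=== $WEIGHTS ==="
  "$BIN" -m "$WEIGHTS" -p "$PROMPT" -n 64 \
    || echo "(llamafile binary not present; this is only a sketch)"
done
```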
The reason I'm asking about training is my work, where fine-tuning our own model is going to come knocking soon, so I want to stay a bit ahead of the curve. Even though it already feels like I'm late to the party.
It's basically GitHub for large language models.