maria [she/her]
Posts: 56
Comments: 1,886
Joined: 2 yr. ago

  • heyyyyyy... maybe don use this aggressive word... maybemaybs... - i jus don like it for som reason...

    also, nuuuuu don just say cutie like that!!! >< That's like - im used to reading it in English but not in ~german...

    also also, who else did u find here who u would call that word and who is also from the grrrmn land?

  • eh, Rheinland-Pfalz, near Mainz. how bout u? <3

  • how is this the very first post i made? insane.

    anyway, i hope u have a lovely day and that u eat something tasty <3

  • it's all about peeps sharing their personal info on these chatbots. Many peeps do, so I guess it kinda makes sense.

  • it would also be super slow, u usually want a GPU for LLM inference.. but u already know this, u are Gandald der zwölfte after all <3

  • something something 🇷🇺 🇨🇳 🪖 or whatever...

  • yes, sorry. i just don't like using those words which remind me of terrible times ;(

    also yes, even more sorri, i should have done a search thingy.... oki fine, imma do that next time ;(

  • FTFY

    what does that mean? i said that we didn't see him doing the move before many models were finished training. so these models literally cannot know that this happened.

  • okay but why?? this is disgusting, ewewewewewewewewew

  • have you activated the search functionality? Otherwise it will not know what you are talking about.

  • fair, if u wanna see it that way, ai is bad... just like many other technologies which are being used to do bad stuffs.

    yes, ai used for bad is bad. yes, guns used for bad are bad. yes, computers used for bad are bad.

    guns are specifically made to hurt people and kill them, so that's kinda a different thing, but ai is not like this. it was not made to kill or hurt people. currently, it is made to "assist the user". And if the owners of the LLMs (large language models) are pro-elon, they might train in the idea that he is okay actually.

    but we can do that too! many people finetune open models to respond in "uncensored" ways. So that there is no gate between what it can and can't say.
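
    for example, this is roughly what those community finetunes look like under the hood - a LoRA sketch with huggingface peft, where the base model id, dataset file and settings are all placeholders, not a real recipe:

    ```python
    # rough sketch of a community-style LoRA finetune with huggingface peft.
    # base model id and dataset file are placeholders - swap in whatever open
    # model and prompt/response pairs you actually want to use.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer, TrainingArguments)
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-3.2-1B"            # placeholder base model
    tok = AutoTokenizer.from_pretrained(base)
    tok.pad_token = tok.eos_token               # llama-style tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA: train tiny adapter matrices on top of the frozen base weights
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    ))

    ds = load_dataset("json", data_files="pairs.json")["train"]   # your own prompt/response pairs

    def tokenize(ex):
        return tok(ex["prompt"] + ex["response"], truncation=True, max_length=512)

    ds = ds.map(tokenize, remove_columns=ds.column_names)

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1, num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

    model.save_pretrained("lora-out")   # a few MB of adapter weights, easy to share
    ```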

  • LLMs (large language models) are not some oracle that magically knows all the latest news. however: if you activate the search functionality, it will look things up online, likely find some article about it, and recognize that reality has moved on since 2023.
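
    (roughly what that search mode does under the hood: fetch something current and paste it into the prompt. tiny sketch below, assuming a local ollama server on its default port - the model tag and snippet are just examples.)

    ```python
    # "activate search" boils down to: stuff a fresh snippet into the prompt
    # so the model isn't stuck at its training cutoff.
    import requests

    snippet = "search result from this week: ..."   # whatever the search step fetched
    question = "what is the current status of this?"

    prompt = (
        "Answer using the context below.\n\n"
        f"Context:\n{snippet}\n\n"
        f"Question: {question}"
    )

    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
    )
    print(r.json()["response"])   # grounded in the snippet instead of stale training data
    ```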

  • or like - any "uncensored" model from the community. those are fun.

  • the training process being shiddy - i completely agree with that. it is simply awful and takes a shidload of resources to get a good model.

    but... running them... feels oki to me.

    as long as you're not running some bigphucker model like GPT-4o to do something a smoler model could also do, i feel it kinda is okay.

    32B parameter size models are getting really, really good, so the inference (running) costs and energy consumption are already going down dramatically when not using the big models provided by BigEvilCo™.

    Models can clearly be used for cool stuff. Classifying texts is the obvious example. Having humans go through that is insane and cost-ineffective, while a 14B parameter (8GB) model can classify multiple pages of text in half a second (rough sketch at the end of this comment).

    obviously using bigphucker models for everything is bad. optimizing tasks to work on small models, even at 3B sizes, is just more cost-effective, so i think the general vibe will go in that direction.

    people running their models locally to do some stuff will make companies realize they don't need to pay 15€ per 1.000.000 tokens to OpenAI for their o1 model for everything. they will realize that paying like 50 cents for smaller models works just fine.

    if i didn't understand ur point, please point it out. i'm not that good at picking up on stuff..
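
    here's the classification thing from above as a rough sketch - a local model behind ollama, where the 14B model tag and the labels are just examples, not a benchmark:

    ```python
    # sketch: ask a local ~14B model (via ollama) to sort a text into fixed labels
    import requests

    LABELS = ["bug report", "feature request", "spam", "other"]

    def classify(text: str) -> str:
        prompt = (
            f"Classify the following text into exactly one of these labels: {', '.join(LABELS)}.\n"
            "Answer with the label only.\n\n"
            f"Text:\n{text}"
        )
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "qwen2.5:14b", "prompt": prompt, "stream": False},
        )
        return r.json()["response"].strip()

    print(classify("the app crashes every time i open settings on debian 12"))
    ```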

  • big sad :(

    wish it would be nice and easi to do stuff like this - yea hosting it somewhere is probably best for ur moni and phone.

  • that's why u gotta not use some company's offering!

    yes, centralized AI bad, no shid.

    PLENTY of good uncensored models on huggingface.

    recently Dolphin 3 looks interesting.
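
    something like this to poke at it with transformers (the repo id is from memory, so double-check it on huggingface first):

    ```python
    # pulling a community finetune straight off huggingface with transformers.
    # repo id written from memory - check the hub, it might differ slightly.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo = "cognitivecomputations/Dolphin3.0-Llama3.1-8B"   # assumed repo id
    tok = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

    msgs = [{"role": "user", "content": "hi! what are you allowed to talk about?"}]
    inputs = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=200)
    print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```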

  • ?

    we didn't see him doing the hand move before many models started training, so they don't have the background.

    LLMs can do cool stuff, it's just being used in awful and boring ways by BigEvilCo™️.

  • i kno! i'm already running a smol llama model on the phone, and yeaaaa that's like 2 tokens per second and it makes the phone lag like crazy... but it works!

    currently i'm doing this with termux and ollama, but if there's some better foss way to run it, i'd be totally happy to use that instead <3
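
    for reference, the same termux + ollama setup talked to from a tiny python script instead of the interactive cli (the model tag is just whatever small model fits in the phone's RAM):

    ```python
    # talk to the local ollama server (default port) and stream tokens as they
    # arrive, so the ~2 tok/s at least shows up immediately.
    import json
    import requests

    def ask(prompt: str, model: str = "llama3.2:1b") -> None:
        with requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": True},
            stream=True,
        ) as r:
            for line in r.iter_lines():
                if line:
                    print(json.loads(line).get("response", ""), end="", flush=True)
        print()

    ask("write me a two line poem about phones being slow")
    ```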

  • nono, the whole thing is about some people putting personal info into these chatbots.

    and even if not, they are guaranteed to train their newer models on the requests and generated responses.

    if ur putting personal info in, running it locally/privately is kinda a must, if u care about security at all.

    i think peeps try lewd prompts once, then find out it doesn't work, and then give up. (they don't know about huggingface)

  • apparently not. it seems they are referring to the official bs deepseek ui for ur phone. running it on your phone fr is super cool! Imma try that out now - with the smol 1.5B model

  • 196 @lemmy.blahaj.zone

    onward and upward rule

    Blahaj Lemmy Meta @lemmy.blahaj.zone

    why this?

    Linux @lemmy.ml

    Gamepad not communicating with Game

    196 @lemmy.blahaj.zone

    chocolate 8 rule

    Mastodon @lemmy.ml

    Weird behaviour in the app

    196 @lemmy.blahaj.zone

    unpacking rule

    Linux @lemmy.ml

    loads of uninstallable dependencies on debian

    196 @lemmy.blahaj.zone

    if there's no rule, there is no me.

    Linux @lemmy.ml

    odd scaling issue with lmms

    Linux @lemmy.ml

    Debian 12 not booting, but OS intact

    196 @lemmy.blahaj.zone

    reject brand, embrace rule

    196 @lemmy.blahaj.zone

    ThinkLight rule

    Linux @lemmy.ml

    SystemD not installing on manjaro (xfce)?

    196 @lemmy.blahaj.zone

    They changed the Twitter logo rule (again again)

    196 @lemmy.blahaj.zone

    Hmmm 🤔

    196 @lemmy.blahaj.zone

    bustin' rule