
  • I'd say an ally is someone you have an alliance with, so someone with whom you have agreed to pursue a common goal. So yeah, I'd say if you are someone's ally, they are also yours.

    That differs somewhat from how it's used in the LGBT+ community, where it refers to non-LGBT+ supporters of LGBT+ rights.

  • If you own housing and rent it out more than you use it yourself, you're a landlord.

    If you rent out your house or apartment while you're on vacation, I wouldn't call you a landlord. But if you have a house or apartment that you only ever offer on Airbnb without ever using it yourself, you're a landlord.

    Btw, I don't agree that being a landlord makes you deserving of a guillotine, but I do agree that we should limit the ownership of housing to natural persons, with a limit on how much space a person can own.

  • Then you probably realize that the issue lies with you rather than with all those young men.

    I'm pointing this out because all this reminds me of a situation I have been in a few times, where boomers and gen X-ers attacked me for being bald, as some of them associate being bald with being a nazi. Sorry ma'am, I'm just bald. If that makes you think I'm a nazi, that's on you.

  • Ah, the classic "old person complaining about fashion trends among the youth".

    Have you considered that there is nothing objectively hideous about these moustaches, and that the reason you think they are hideous most likely lies in associations and experiences that you have, but younger people don't?

    I noticed that you called moustaches "creepy" multiple times. What's up with that? What do you associate with moustaches that makes them creepy?

    And, if having a beard and a moustache is ok, why is having a moustache without a beard so bad?

    Honestly, you write that you are questioning Gen Z's sanity, but your ranting doesn't exactly make you sound sane yourself 😅

  • I think pretty much everyone would agree that's bad. However, I don't think we'll ever get to the point where we recognize a machine might be capable of suffering. There is no way of proving anything, biological or not, has a consciousness and the capability to suffer. And with AI being so different from us, I believe most people would simply disregard the idea.

    Heck, look at the way we treat animals. A pig's brain is very similar to our own. Nociceptors, the nerve cells responsible for pain in humans, can also be found in most animals, but we don't care. We kill 4 million pigs every day, and 200 million chickens. No mass murder in the history of mankind even gets close to that.

    The sad truth is, most people only care about their own wellbeing, and that of their friends and family. Even other humans don't matter, as long as they're strangers. Otherwise people wouldn't be hoarding wealth like that while hundreds of millions of people around the world are starving.

    Ah sorry, I kinda started ranting. Yes, I'd care.

  • I feel like the term "incel" has developed somewhat. I guess in the beginning it was used by men struggling to get into a relationship to refer to themselves. It was used to find others with the same issues and form a kind of self-help group that could provide comfort and maybe even improvement. If that's what incels were today, they wouldn't be hated like that. Perhaps they would be belittled or made fun of.

    But that's not what we understand incels to be today. Incels now seem to be extremely bitter, delusional, pathetic individuals. They don't recognize that the issue lies with themselves; instead, it's supposedly the fault of women, who won't accept their place in society. In the mind of an incel, they deserve to have sex, and that means it should be a woman's duty to please them.

    So no, you are probably not an incel, even if perhaps you would have been under the original definition.

  • LLMs are absolutely complex, neural nets ARE somewhat modelled after human brains after all, and trying to understand transformers or LSTMs for the first time is a real pain. However, what an NN can do, or rather what it has been trained to do, depends almost entirely on the loss function used. The complexity of the architecture and the training dataset don't change what an LLM can do, only how good it is at doing it, and how well it generalizes. The loss function used to train LLMs simply evaluates whether the predicted tokens match the actual ones. That means an LLM is trained to predict words from other words, or in other words, to model language.

    The loss function does not evaluate the validity of logical statements, though. All reasoning that an LLM is capable of, or seems to be capable of, emerges from its modelling of language, not an actual understanding of logic.
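
    To make that concrete, here is a minimal sketch of that objective (illustrative names and shapes, not any specific model's implementation): the loss is just cross-entropy between the predicted next-token distribution and the token that actually followed.

    ```python
    # Minimal sketch of the LLM training objective: cross-entropy on
    # next-token prediction. Shapes and names are illustrative.
    import torch
    import torch.nn.functional as F

    vocab_size = 50_000
    batch, seq_len = 8, 128

    # Stand-in for a model's raw scores over the vocabulary,
    # one distribution per position in the sequence.
    logits = torch.randn(batch, seq_len, vocab_size)
    # The training text shifted by one: position t is scored against
    # the token that actually appears at position t + 1.
    targets = torch.randint(0, vocab_size, (batch, seq_len))

    # The entire objective: make the actual next token likely.
    # Nothing in it evaluates whether a statement is logically valid.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           targets.reshape(-1))
    ```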

  • Honestly, I feel that claiming an LLM can reason is an outrageous claim that needs to be proven/cited, not the other way around. "My hamster can reason, your claim that it can't is outrageous and the burden of proof lies with you."

  • Ok, maybe I didn't make my point clear: yes, they can produce text in which they reason. However, that reasoning mimics the reasoning found in the training data. The arguments an LLM makes and the stance it takes will always reflect its training data. It cannot reason counter to that.

    Train an LLM on a bunch of English documents and it will suggest nuking Russia. Train it on a bunch of Russian documents and it will suggest nuking the West. In both cases it has learned to "reason", but it can only reason within the framework it has learned.

    Now if you want to find a solution for world peace, I'm not saying that AI can't do that. I am saying that LLMs can't. They don't solve problems, they model language.

  • LLMs are trained to see parts of a document and reproduce the other parts; that's why they are called "language models".

    For example, they might learn that the words "strawberries are" are often followed by the words "delicious", "red", or "fruits", but never by the words "airplanes", "bottles" or "are".

    Likewise, they learn to mimic the reasoning contained in their training data. They learn the words and structures involved in an argument, but they also learn the conclusions they should arrive at. If the training dataset consists of 80 documents arguing for something and 20 arguing against it (assuming nothing else, like length, differentiates those documents), the LLM will adopt the standpoint of the 80 documents and argue for that thing (see the toy sketch at the end of this comment). If those 80 documents contain flawed logic, so will the LLM's reasoning.

    Of course, you could train an LLM on a carefully curated selection of documents without any logical fallacies. Perhaps such a model might be capable of actual logical reasoning (though it would still be biased by the conclusions contained in the training dataset).

    But to train an LLM you need vast amounts of data. Filtering out documents containing flawed logic not only requires a lot of effort, it also reduces the size of the training dataset.

    Of course, that is exactly what the big companies are currently researching, and I am confident that LLMs will only get better over time. But the LLMs of today are trained on large datasets rather than perfect ones, and their architecture and training prioritize language modelling, not logical reasoning.
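
    As a toy illustration of both points (a made-up three-sentence corpus, nothing like a real training setup), here is the crudest possible "language model": a next-word frequency table. The mechanism that rules out "airplanes" is the same one that makes the 80-document standpoint win.

    ```python
    # Toy "language model" over a made-up corpus: count which word
    # follows which, then predict the most common continuation. Real
    # LLMs are vastly more sophisticated, but the objective is the same
    # kind of "reproduce the continuations seen in training".
    from collections import Counter, defaultdict

    corpus = (
        "strawberries are delicious . "
        "strawberries are red . "
        "strawberries are fruits ."
    ).split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    # "are" can only be followed by what the data contained:
    print(follows["are"])  # Counter({'delicious': 1, 'red': 1, 'fruits': 1})
    print(follows["are"]["airplanes"])  # 0, never seen in training

    # The 80/20 point is the same mechanism: if 80% of the observed
    # continuations argue "for" something and 20% "against", then
    # "for" simply becomes the more probable continuation.
    ```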

  • It should be mentioned that those are language models trained on all kinds of text, not military specialists. They string together sentences that are plausible based on the input they get; they do not reason. These models mirror the opinions most commonly found in their training datasets. The issue is not that AI wants war, but rather that humans do, or at least that the majority of the training dataset's authors do.