Posts 0 · Comments 334 · Joined 2 yr. ago

  • And now imagine if, instead of building new schools in places where everybody has to be driven there by car or bus, we built them so that walking or biking is the more convenient option for the majority. Other countries like Japan can imagine it. Turns out it's actually better to walk or bike to school, who knew!

  • Not decaying. The Nazis were always fascist; they put on a front of being progressive to garner support, which worked quite well, as we can tell from history. By the time it became obvious they weren't really progressive, they were already in power.

  • We have never seen an actual communist country. The USSR, for example, was a fascist dictatorship, which runs directly counter to the first property of communism: it must be stateless.

    Fascists like the Nazis like to claim they are for the people, and sadly the only "communism" we've seen so far has been carried out by their hands. This is similar to how the Nazis were supposedly progressive... Hopefully we can agree that is obviously not the case.

  • So why can it often output correct information after it has been corrected? This should be impossible according to you.

    It generally doesn't. It apologizes, then outputs either exactly (or very nearly) the same thing as before, or something else that's wrong in a brand new way. Have you used GPT before? This is a common problem; it's part of why you cannot trust anything it outputs unless you already know enough about the topic to determine its accuracy.

    No, LLMs understand a tree to be a complex relationship of many, many individual numbers. Can you clearly define how our understanding is based on something different?

    And did you really just go "nuh uh, it's actually in binary"? I used the collection-of-symbols explanation because that's how OpenAI describes it, so I thought it was safe to skip all the detail. Since it's apparently needed and you're unlikely to listen to me, there's a good explanation in video form by Kyle Hill. I'm sure many other people have explained it much better than I can, so instead of trying to prove me wrong (which we can keep doing all day), go learn about them. LLMs are super interesting and yet ultimately extremely primitive.
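
    To make that "collection of symbols" point concrete, here's a rough sketch, purely my own toy illustration and not OpenAI's actual tokenizer, of how text becomes nothing but numbers before the model ever sees it:

```python
# Toy word-level "tokenizer": to the model, "tree" is just an ID, not a
# growing living thing. Real tokenizers split text into subword pieces.
toy_vocab = {"the": 0, "tree": 1, "is": 2, "green": 3, "tall": 4}

def toy_tokenize(text):
    # Unknown words map to a placeholder ID here; real vocabularies are huge.
    return [toy_vocab.get(word, -1) for word in text.lower().split()]

print(toy_tokenize("The tree is green"))  # [0, 1, 2, 3]
```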

  • We understand a tree to be a growing, living thing; an LLM understands a tree as a collection of symbols. When it creates output, it doesn't decide that one synonym is more appropriate than another; the choice comes down to which collection of symbols is more statistically likely.

    Take, for example, attempting to correct GPT: it will often admit fault yet not "learn" from it. Why not? If it understood words it should be able to, at least in that context, stop outputting the incorrect information, yet it still does. It doesn't learn because it can't. It doesn't know what words mean. It knows that when it sees the symbols representing "You got {thing} wrong", the most likely symbols to follow represent "You are right, I apologize".

    That's all LLMs like GPT do currently. They analyze a collection of symbols (not actual text) and then output what they determine to be most likely to follow. That causes very interesting behavior: you can talk to one and it will respond as if you are having a conversation.
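
    If it helps, here's a deliberately tiny sketch of that "output whatever is most likely to follow" idea, a made-up word-pair count table rather than anything resembling a trained model:

```python
from collections import Counter

# Count which word follows which in a tiny made-up corpus. A real LLM learns
# billions of weights over subword tokens, but the principle is the same.
corpus = "you got that wrong . you are right i apologize . you got it wrong".split()
next_counts = {}
for current, following in zip(corpus, corpus[1:]):
    next_counts.setdefault(current, Counter())[following] += 1

def most_likely_next(word):
    # Pick the single most frequent follower: pure statistics, no meaning.
    return next_counts[word].most_common(1)[0][0]

print(most_likely_next("you"))  # 'got'
print(most_likely_next("are"))  # 'right'
```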

  • I mean, plain old autocorrect does a surprisingly good job. Here's a quick example, only tapping the middle suggested word: "I will be there for you to grasp since you think your instance is screwy." I think everybody can agree that sentence is a bit weird, but an LLM has about as much understanding of its output as the autocorrect/word suggestions did.

    A conversation is, by definition, at least two-sided. You can't have a conversation with a tree or a brick, but you could have one with another person. An LLM is not capable of thought. It "converses" via a more advanced version of what your phone's autocorrect does when it gives you a suggested word. If you think of that as conversation, I find that an extremely lonely definition of the word.

    So to me, yes, it does matter.

  • Thanks! A lot of people don't seem to realize that GPT doesn't actually have any idea what words mean. It just outputs stuff based on how likely it is to show up after the previous stuff. This results in very interesting behavior, but there's nothing conceptually "there": no thinking, and so no conversation to be had.

  • Can't say that I struggled to understand pointers, but if GPT helped you conceptualize 'em, that's good. I really don't see much utility in even the current iterations of these LLMs. Take Copilot, for example: ultimately all it actually helps with is boilerplate, and if you're writing enough boilerplate for it to be meaningfully helpful, you can get by with a fancy IDE live template or just a plain old snippet.

    There are a lot of interesting things it could be doing, like checking whether my documentation is correct, but all it does is shit I could do myself with less hassle.

    There's also the whole issue of LLMs having no concept of anything. You aren't having a conversation; it just spits out the words it thinks are most likely to occur in the given context. That can be helpful for extremely generic questions it's been trained on thanks to Stack Overflow, but GPT doesn't actually know the right answer. It's like really fancy autocorrect based on the current context. What this means is you absolutely cannot trust anything it says unless you know enough about the topic to determine whether what it outputs is accurate.

    To draw a comparison to written language (hopefully you don't know Japanese): is 私 or 僕 "I"? Can you confidently rely on autocorrect to pick the right one? Probably not, because the first one, わたし (watashi), is "I" and the second, ぼく (boku), is also "I" (more boyish). Trusting an LLM's output without being able to verify its accuracy is like trusting autocorrect to pick the right word in a language you don't know. Sure, it'll generally work out fine, but when it fails you don't have the knowledge to even notice.

    Because of these failings I don't see much utility in LLMs, especially since the current obsession is chat apps geared toward the general public to fool around with.

  • Alternatively, since both floats (32-bit) and doubles (64-bit) are represented in binary, we can directly compare them to the possible values an int (32-bit) and a long (64-bit) have. That is to say, a float has the same number of possible values as an int does (and a double has the same number as a long). That's quite a lot of values, but still ultimately limited.

    Since we generally use decimal numbers that look like 1.5 or 3.14, it's set up so the values are clustered around 0: with each successive power of 2 the same number of representable values gets stretched over twice the range, meaning you have high precision around zero (what you use and care about in practice) and less precision as you move toward negative infinity and positive infinity.

    In essence, it's a fancy fraction that is most precise when representing a small value and less precise as the value gets farther from zero.
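
    If you want to poke at that yourself, here's a quick sketch (Python 3.9+ for math.ulp) showing that a float is just bits underneath and that the gap between adjacent representable doubles grows with the magnitude:

```python
import math
import struct

# A 32-bit float is just 4 bytes of binary, the same number of bit patterns
# a 32-bit int has; 1.5 packs to the pattern 0x3fc00000.
print(struct.pack(">f", 1.5).hex())  # 3fc00000

# math.ulp gives the gap to the next representable double: fine-grained
# near zero, coarser and coarser as the magnitude grows.
print(math.ulp(1.0))   # ~2.2e-16
print(math.ulp(1e6))   # ~1.2e-10
print(math.ulp(1e15))  # 0.125
```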