
„Initials" (https://github.com/dicebear/dicebear) by DiceBear, licensed under CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Posts: 3
Comments: 553
Joined: 1 yr. ago

[deleted]

  • Wrong, that's the opposite of how reasonable doubt works. It is the prosecutor's job to prove beyond reasonable doubt that the defendant is guilty of the charges. The defendant does not need to prove they are innocent.

    If the prosecutor can't prove that the defendant is lying about the alibi, then they've failed at their job.

  • If it can power up and decrypt the Docker volumes on its own without prompting you for a password in your basement, it will also power up and decrypt the Docker volumes on its own without prompting the robbers for a password in their basement.
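    The comment above can be sketched as a toy model (all names and the XOR "cipher" here are illustrative, not a real encryption scheme): if the key material lives on the same disk as the ciphertext so the machine can boot unattended, anyone holding the disk can replay the same decryption.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (XOR against a hash-derived keystream),
    # purely to illustrate the threat model; not real cryptography.
    stream = hashlib.sha256(key).digest()
    stream *= len(data) // len(stream) + 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Unattended-boot setup: the keyfile sits on the same disk as the
# encrypted volume, so the box can decrypt without asking a human.
keyfile = b"keyfile-stored-on-the-same-disk"
encrypted_volume = xor_cipher(b"secret docker volume contents", keyfile)

def unattended_boot(disk_image):
    # Whoever has the disk has both pieces; the machine happily
    # decrypts for them, owner and thief alike.
    ciphertext, key = disk_image
    return xor_cipher(ciphertext, key)

# The robbers image the stolen disk and get exactly what you get.
assert unattended_boot((encrypted_volume, keyfile)) == b"secret docker volume contents"
```

    A real setup avoids this by keeping the unlock secret off the stolen hardware: a passphrase typed by a human, or a key held elsewhere.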

  • Run on a treadmill and lift some weights?

    Yes, that is exactly what you do at a gym.

    I feel like I could do all of that at home. Gym memberships are insanely expensive.

    Absolutely correct.

    Are home workouts actually effective?

    Yes.

    Does one even enjoy gym time?

    Yes.

  • There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

    Incorrect. You might want to take an information theory class before speaking on subjects like this.

    I literally cannot be wrong that LLMs cannot think or reason, there’s no room for debate, it’s settled long ago.

    Lmao yup totally, it's not like this type of research currently gets huge funding at universities and institutions or anything like that 😂 it's a dead research field because it's already "settled". (You're wrong 🤭)

    LLMs are just tools, not sentient or verging on sentient

    Correct. No one claimed they are "sentient". (You actually mean "sapient", not "sentient", but that's fine, because people commonly mix these terms up. Sentience is about the physical senses: if you can respond to stimuli from your environment, you're sentient; if you can "I think, therefore I am", you're sapient.) And no, LLMs are not sapient either, and sapience has nothing to do with a neural network's ability to mathematically reason or use logic; you're just moving the goalposts. But at least you moved them far enough to actually be correct.