frightful_hobgoblin @lemmy.ml
Posts: 50 · Comments: 872 · Joined: 2 yr. ago

  • when you input something into an LLM and regenerate the responses a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning

    Can you paste an example of this error?
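    For context on what regeneration does: with temperature sampling, re-running the same prompt draws fresh tokens, so if a model puts similar probability mass on contradictory continuations, two regenerations can assert opposite things. A minimal toy sketch (the distribution below is invented for illustration, not measured from any real model):

    ```python
    # Toy sketch (not a real LLM): with temperature sampling, regenerating the
    # same prompt can flip the meaning whenever the model assigns similar
    # probability to contradictory tokens. The logits here are invented.
    import math
    import random

    # Hypothetical next-token logits after a prompt like "The claim is ...".
    logits = {"true": 2.0, "false": 1.9, "unverified": 1.0}

    def sample(logits, temperature=1.0):
        """Softmax-with-temperature sampling over a token -> logit dict."""
        scaled = {t: l / temperature for t, l in logits.items()}
        z = sum(math.exp(v) for v in scaled.values())
        r, acc = random.random(), 0.0
        for token, v in scaled.items():
            acc += math.exp(v) / z
            if r <= acc:
                return token
        return token  # fallback for floating-point rounding

    random.seed(0)
    print([sample(logits) for _ in range(8)])
    # e.g. ['true', 'false', 'true', ...] — opposite claims from one prompt
    ```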

  • Like if I go to the Journal of Fusion Energy (https://link.springer.com/journal/10894), the latest article is titled 'Artificial Neural Network-Based Tomography Reconstruction of Plasma Radiation Distribution at GOLEM Tokamak' and the fourth-latest is 'Deep Learning Based Surrogate Model for Fast Soft X-ray (SXR) Tomography on HL-2A Tokamak'. I am sorry if that upsets you, but that's the way the field is.
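    The general idea behind surrogate-model papers like those fits in a few lines. A hedged toy sketch (invented chord geometry and profile family, not the papers' actual setup): train a network that maps detector line integrals straight back to an emissivity profile, so one cheap forward pass replaces a slow iterative tomographic inversion.

    ```python
    # Hedged sketch of the surrogate-model idea (invented geometry/profiles,
    # not the papers' actual code): learn the inverse map from detector line
    # integrals to a plasma emissivity profile.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_radii, n_chords, n_samples = 32, 16, 5000
    r = np.linspace(0.0, 1.0, n_radii)

    # Stand-in viewing geometry: each chord integrates the profile with a
    # random positive weight pattern.
    G = rng.uniform(0.0, 1.0, size=(n_chords, n_radii))

    # Synthetic training profiles: Gaussian bumps with random amplitude,
    # centre and width, plus noisy chord signals from the forward model.
    amp = rng.uniform(0.5, 2.0, n_samples)
    mu = rng.uniform(0.0, 0.5, n_samples)
    w = rng.uniform(0.05, 0.3, n_samples)
    profiles = amp[:, None] * np.exp(-(((r[None, :] - mu[:, None]) / w[:, None]) ** 2))
    signals = profiles @ G.T + rng.normal(0.0, 0.01, (n_samples, n_chords))

    # The surrogate: chord signals in, full radial profile out.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(signals, profiles)

    # A fast "tomographic inversion" is now a single forward pass.
    test = amp[0] * np.exp(-(((r - mu[0]) / w[0]) ** 2))
    recon = model.predict((test @ G.T).reshape(1, -1))[0]
    print("max abs reconstruction error:", float(np.abs(recon - test).max()))
    ```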

  • This thread is funny. A few users are like "😡😡😡I hate everything about AI😡😡😡" and also "😲😲😲AI is used for technical research??? 😲😲😲 This is news to me! 😲😲😲"

    Talk about no-investigation-no-right-to-speak. How can you have an opinion on a field without even knowing roughly what the field is?

  • But it's inherently impossible to "show" anything except inputs & outputs (including for a biological system).

    What are you using the word "real" to mean, and is it aloof from the measurable behaviour of the system?

    You seem to be using a mental model in which there are two separate things:

    • A: the measurable inputs & outputs of the system
    • B: the "real understanding", which is separate

    How can you prove B exists if it's not measurable? You say there is an "onus" to do so. I don't agree that such an onus exists.

    This is exactly Searle's Chinese Room argument. 'Understand' is usually given a functionalist reading.
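    A toy illustration of that functionalist point (invented for this note, not from the thread): two "rooms" with identical input-output behaviour are indistinguishable to any observer who only has A, so a separate B does no measurable work.

    ```python
    # Toy version of the argument above: an observer restricted to inputs and
    # outputs (A) cannot distinguish a lookup-table "room" from one that
    # computes, so a separate "real understanding" (B) is not measurable.

    def room_lookup(x: int) -> int:
        # A giant rulebook: every answer pre-tabulated, nothing "understood".
        table = {n: n * n for n in range(1000)}
        return table[x]

    def room_compute(x: int) -> int:
        # The supposedly "genuine" computation.
        return x * x

    def black_box_probe(room, trials=range(1000)):
        """Everything an outside observer can collect: (input, output) pairs."""
        return [(x, room(x)) for x in trials]

    # Identical observable behaviour under every probe in the test set.
    assert black_box_probe(room_lookup) == black_box_probe(room_compute)
    print("indistinguishable from inputs & outputs alone")
    ```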