
  • It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

    This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

  • As with many things, it’s hard to pinpoint the exact moment when narrow AI or pre-AGI transitions into true AGI. However, the definition is clear enough that we can confidently look at something like ChatGPT and say it’s not AGI - nor is it anywhere close. There’s likely a gray area between narrow AI and true AGI where it’s difficult to judge whether what we have qualifies, but once we truly reach AGI, I think it will be undeniable.

    I doubt it will remain at "human level" for long. Even if it were no more intelligent than humans, it would still process information millions of times faster, possess near-infinite memory, and have access to all existing information. A system like this would almost certainly be so obviously superintelligent that there would be no question about whether it qualifies as AGI.

    I think this is similar to the discussion about when a fetus becomes a person. It may not be possible to pinpoint a specific moment, but we can still look at an embryo and confidently say that it’s not a person, just as we can look at a newborn baby and say that it definitely is. In this analogy, the embryo is ChatGPT, and the baby is AGI.

  • I’m not sure determinism necessarily means that re-running the "simulation" would always produce the exact same result. It’s conceivable that some randomness could exist, where a single elementary particle behaving one way rather than another millions of years ago could change the entire trajectory of the universe. You can always trace the causal chain of events backwards, but I don't think you can do the same going forward. Not even if you're Laplace's Demon.

    While I believe it’s true that people couldn’t have acted otherwise - meaning that if an event, like an execution, happened, it doesn’t make sense to say it could have been avoided - that doesn’t mean the future can’t be influenced. A person may be "pre-determined" to kill, but if you intervene and manage to convince them not to, their change of mind is still perfectly compatible with the absence of free will.

  • It's not this specific thing I'm trying to avoid - it's this category of things. The vast majority of it I'm not interested in, so if I lose a few gems along with it, that's a price I'm willing to pay. By definition it can't bother me when I don't even know what I'm missing.

  • I place value on curating my online media diet so that certain topics I'm not interested in are excluded from it. I don't value whatever is trending on Twitter at this very moment, so I don't pay any attention to it. I don't just feed on whatever the social media algorithms are serving me; instead I try to be intentional about it. I can't know what I don't know. It's only when something like "Hawk Tuah" shows up on my Lemmy feed that I get concrete evidence that I have, in fact, successfully managed to avoid it.

    And I'm sorry to inform you but I still have no clue what it is nor do I care.

  • Sorry, but I don't understand the argument you're trying to make here. You seem to be implying that the absence of free will would mean we live in a fatalistic universe, but I don't think that's true. Fatalism doesn't make any logical sense to me; it seems to imply action without a cause, which is the exact opposite of what determinism means.

  • Carrying out a punishment doesn't make much sense in itself, since the person couldn't have done otherwise, but not carrying it out sends the signal to others that actions don't have consequences, and thus the deterrence stops working. That's why we have to "punish" people for breaking the law - not because it makes a difference to that specific individual, but because it sends a signal to others.

  • But it does mean I’m doing something right, as not knowing about things like this is exactly what I aim for. Not knowing what “Hawk Tuah” is means I’ve successfully excluded the kinds of things from my media diet that I intended to avoid. I’m not making any universal moral judgments here - these are my values. You’re free to value different things, but don’t waste your time telling me I’m valuing the wrong ones.

  • Exactly. There's a limited number of things I can pay attention to over the week. The fact that I don't know about some completely trivial cultural thing means I've paid attention to something else instead. That something else may very well be equally trivial, but it also might not be.

  • We've had a definition for AGI for decades: a system that can do any cognitive task as well as a human can, or better. Humans are "generally intelligent" - replicate the same thing artificially and you've got AGI.