
Posts: 11 · Comments: 474 · Joined: 2 yr. ago

  • That we rationalized taking children, in their most important phase of development, away from the most important entity for that phase is the prime example of how we failed as mankind.

  • Ask him to add "This is interesting!" followed by "please".

    If he won't, repeat the request 3 times, then announce that you're going to pretend he didn't say a thing and refuse to move.

  • Vagrus - The Riven Realms.

    Essentially it's an Oregon Trail game set in a fantasy world. But effectively, it blends many different genres: there's caravan management, decision making, strategy, tactical combat, profit making, cRPG... It's highly complex and challenging, but also very fun.

  • I know how it went.

    If people are ok with such treatment, they surely won't mind me calling them "spineless cowards".

  • Everyone is always equally drunk, and the road is always the same

    Guys, an unpopular opinion, but perhaps you shouldn't take such long showers, eh?

  • People are spineless pussies who'd rather bend over for the devil than let go of their convenience and pick an alternative.

    More news at 5.

  • Yes. They can write code.

    You don't seem to understand me, or are trying very hard to not understand me.

    I'll try again, but if it fails, I'll assume it's a "lead a horse to water" case.

    So: can AIs write their own code? As in "rewrite the code that is them"? Not write some small pieces of code, a small app, but can they write THEIR OWN code, the one that makes them run?

    And my point is that neural networks don’t require understanding of whatever they’re trained on.

    Your point does not address my argument.

    You can't compare a thing to something you neither understand nor can predict the capabilities of.

  • ZXSpectrum

    People there are so happy when they see a game that reminds them of their past. It's very heartwarming to see someone commenting "man, I remember this game, played it when I was younger and happier".

  • Rather than be infuriated, block the site that wrote such nonsense. Consider writing a relevant email first, where you explain your position, and ask friends to join you in the boycott.

    The world is the way it is because we forgot that there are consequences to our actions and claims.

    As usual, I blame exTwitter and Facebook.

  • 90% of the internet has always had ads

    No, it didn't, kid.

  • Yes. That’s what training is.

    I'm not talking about building a database of data harvested from external sources. I'm not talking about the designs they make.

    I'm asking whether AIs are able and allowed to modify THEIR OWN code.

    We know they follow the laws of physics, which are Turing complete.

    Scientists are continuously baffled by the universe - a very physical thing - and the things they discover in it. The point is that knowing a thing follows certain specific laws does not give us understanding of it or mastery over it.

    We do not know the full extent of what our brains are capable of. We do not even know where "the full extent" may end. Therefore we can't say that AIs are capable of doing what our brains can, even if the underlying principles seem "basic" and "straightforward".

    It's like comparing a calculator to a supercomputer and claiming the former can do what the latter does, because "it's all 0s and 1s, man". 😉

  • I agree with the basic idea, but there’s not some fundamental distinction between what we have now and true AI.

    Are the AIs we have at our disposal able and allowed to self-improve on their own? As in: can they modify their own internal procedures and possibly reshape their own code to better themselves, thus becoming more than their creators predicted them to be?

    There’s nothing the human brain can do that they can’t, so with enough resources they can imitate the human brain.

    The human brain can:

    • interfere with any of its "hardware" and break it
    • go insane
    • preoccupy itself with absolutely pointless stuff
    • create for the sake of creation itself
    • develop and maintain illusions it will come to trust as real
    • choose to act against undeniable proof given to it

    These are of course tongue-in-cheek examples of what a human brain can do, but - from the perspective of neuroscience, psychology and a few adjacent fields of study - it is absolutely incorrect to say that AIs can do what a human brain can, because we're still not sure how our brains work, and what they are capable of.

    Based on some dramatic articles we see in the news promising us "trauma-erasing pills" or "a new breakthrough in curing Alzheimer's", we may tend to believe that we know what this funny blob in our heads is capable of, and that we have but a few small secrets left to uncover, but the fact is that we can't even be sure just how much there is to discover.

  • The AIs we have at our disposal can't invent a thing - yet - because they aren't true AIs - again: yet.

    They are merely tools, and should be perceived as such, nothing more. It's the people who use them who may apply them to tasks that result in invention, but on their own they are closer to the Chinese Room principle than to thinking, inventive constructs.

  • The mind can neither break nor bend. It's a figure of speech, dummy.

  • Same. We have long blogposts here, we have questions, polls... Wtf?

  • The entire showerthought must be in the title

  • I can't provide a precise answer, since some services rely on HDD performance, while others benefit from a big amount of RAM.

    Personally, RAM and reliability are the two things I'm after when entertaining the idea of a home server.

    For example: I'm about to build a very simple file server + Jellyfin + print server + RDP rig, and it's going to be based on a Dell 5040 + 8 GB RAM + 4 TB SATA, running... Windows 10 Pro. 🤠

  • It doesn't go away.

    It's simply not as alluring as it was before, now that combat footage or recordings from yet another mass shooting appear everywhere, hardly any of it censored.