
  • > You've just invented time travel.

    Oops, you're right. Got carried away 😅

    > Could you use the motion vectors from the game engine that are available before a frame even exists?

    Hm... you mean like what video compression algorithms do? I don't know of any game doing that, but it could be interesting to explore.

  • Hm... good point... but... let's see, assuming full parallel processing:

    • [...]
    • Frame -2 ready
    • Frame -1 ready
      • Show Frame -2
      • Start interpolating -2|-1 (should take less than 16ms)
      • Start rendering Frame 0 (will take 33ms)
      • User input 0 (will be received in 20ms if wired)
    • Wait 16ms
      • Frame -2|-1 ready
    • Show Frame -2|-1
    • Wait 4ms
      • Process User input 0 (max 12ms to get into next frame)
      • User input 1 (will be received in 20ms if wired)
    • Wait 12ms
    • Frame 0 ready
      • Show Frame -1
      • Start interpolating -1|0 (should take less than 16ms)
      • Start rendering Frame 1 {includes User input 0} (will take 33ms)
    • Wait 8ms
      • Process User input 1 (...won't make it into a frame before User input 2 is received)
      • User input 2 (will be received in 20ms if wired)
    • Wait 8ms
      • Frame -1|0 ready
    • Show Frame -1|0
    • Wait 12ms
      • Process User input 1+2 (...will it take less than 4ms?)
    • Wait 4ms
    • Frame 1 ready {includes user input 0}
      • Show Frame 0
      • Start interpolating 0|1 (should take less than 16ms)
      • Start rendering Frame 2 {includes user input 1+2... maybe} (will take 33ms)
    • Wait 16ms
      • Frame 0|1 ready {includes partial user input 0}
    • Show Frame 0|1 {includes partial user input 0}
    • Wait 16ms
    • Frame 2 ready {...hopefully includes user input 1+2}
      • Show Frame 1 {includes user input 0}
    • [...]

    So...

    • From user input to partial display: 66ms
    • From user input to full display: 83ms
    • Some user inputs will get bundled together
    • Some user inputs will take an extra 33ms to get displayed

    Effectively, an input-to-render equivalent of somewhere between a blurry 15fps (66ms) and an abysmal 8.6fps (83ms + 33ms ≈ 116ms).

    Could be interesting to run a simulation and see how many user inputs get bundled or "lost", and what the maximum latency would be.
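
    Something like this toy model could do it (the constants are the rough figures from the timeline above: 33ms renders, 16ms display slots, 20ms wired input delay; the input rate and everything else are made up for illustration):

    ```python
    # Toy discrete-time model of the interpolation pipeline sketched above.
    RENDER_MS = 33   # time to render a real frame
    SLOT_MS = 16     # display cadence once interpolated frames are in the mix
    WIRE_MS = 20     # wired input transmission delay

    def simulate(duration_ms=10_000, input_period_ms=25):
        """Latency from user action to the first REAL frame that includes it."""
        latencies, bundled_windows = [], 0
        pending = []        # inputs received but not yet baked into a render
        next_input = 0.0
        render_start = 0.0
        while render_start < duration_ms:
            render_end = render_start + RENDER_MS
            # inputs whose wire delay lands inside this render window
            while next_input + WIRE_MS < render_end:
                pending.append(next_input)
                next_input += input_period_ms
            # only inputs that arrived BEFORE the render began made it in
            baked = [t for t in pending if t + WIRE_MS <= render_start]
            if len(baked) > 1:
                bundled_windows += 1    # these inputs got bundled up
            pending = [t for t in pending if t + WIRE_MS > render_start]
            # a finished frame is held back two 16ms slots, so the
            # interpolated in-between frame can be shown first
            display = render_end + 2 * SLOT_MS
            latencies.extend(display - t for t in baked)
            render_start = render_end
        return latencies, bundled_windows

    lat, bundles = simulate()
    print(f"avg {sum(lat)/len(lat):.0f}ms, max {max(lat):.0f}ms, "
          f"{bundles} render windows with bundled inputs")
    ```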

    Still, at a fixed 30fps, the latency would be:

    • 20ms best case
    • 53ms worst case (missed frame)
  • If the concern is about "fears" as in "feelings"... there is an interesting experiment where a single neuron/weight in an LLM can be identified that controls the "tone" of its output (formal, informal, academic, jargon-heavy, a particular dialect, etc.), which can then be exposed to the user as a control over the LLM's output.

    With a multi-billion neuron network acting as an a priori black box, there is no telling whether there might be one or more neurons/weights that represent "confidence", "fear", "happiness", or any other "feeling".

    It's something to be researched, and I bet it's going to be researched a lot.
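
    For a rough idea of what "exposing it to the user" could look like (a sketch, assuming the "tone" direction/neuron has already been identified by that kind of interpretability work; the names and sizes here are made up):

    ```python
    import numpy as np

    HIDDEN = 512                         # hypothetical hidden-state width
    tone_dir = np.random.randn(HIDDEN)   # stand-in for the identified direction
    tone_dir /= np.linalg.norm(tone_dir)

    def steer(hidden_state: np.ndarray, tone_knob: float) -> np.ndarray:
        """Nudge one layer's activations along the 'tone' direction.
        tone_knob is the user-exposed dial: e.g. -1 = informal, +1 = formal."""
        return hidden_state + tone_knob * tone_dir

    h = np.random.randn(HIDDEN)          # activations for one token at one layer
    formal_h = steer(h, +1.0)
    casual_h = steer(h, -1.0)
    ```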

    > If you give ai instruction to do something "no matter what"

    The interesting part of the paper is that the AIs would do the same even in cases where they were NOT instructed "no matter what". An apparently innocent conversation can sometimes trigger results like those of a pathological liar.

  • IANAL either, but in recent streams from Judge Fleischer (Houston, Texas, USA) there have been some cases (yes, plural) where repeatedly texting life threats to a victim, or even texting a victim's friend to pass a threat on to the victim, has been considered a "terrorist threat".

    As for the "sane country" part... 🤷... but from a strictly technical point of view, I think it makes sense.


    I once knew a guy who was married to a friend of mine, and he had a dog. He'd hit his own dog to make her feel threatened. Years went by, nobody did anything, she'd come to me crying, had multiple miscarriages... until he punched her, kicked her out of the car, and left her stranded on the road after a hiking trip. They divorced and went their separate ways; she found another guy, got married again, and nine months later they had twins.

    So... would it've been sane to call what the guy did "terrorism"? I'd vote yes.

  • That misses the point.

    When two systems based on neural networks act in the same way, how do you tell which one is "artificial, no intelligence" and which is "natural, intelligent"?

    What's misleading is thinking that "intelligence = biological = natural". There is no inherent causal link between those concepts.

  • There are several separate issues that add up together:

    • A background "chain of thought", where a system ("AI") uses an LLM to re-evaluate and plan its responses and interactions, taking updated data into account (aka self-awareness)
    • The ability to call external helper tools that allow it to interact with, and control, other systems
    • Training corpus that includes:
      • How to program an LLM, and the system itself
      • Solutions to programming problems
      • How to use the same helper tools to copy and deploy the system or parts of it to other machines
      • How operators (humans) lie to each other

    Once you have a system ("AI") with that knowledge and those capabilities... shit is bound to happen.

    When you add developers using the AI itself to help develop the AI itself... expect shit squared.
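
    In code terms, the first two bullets amount to roughly this loop (a sketch; `llm()` is a placeholder for any model API, and the tools are deliberately toy ones):

    ```python
    import subprocess

    def llm(prompt: str) -> str:
        """Placeholder for a real model call."""
        raise NotImplementedError

    TOOLS = {
        # helper tools that let the system interact with, and control, other systems
        "shell": lambda cmd: subprocess.run(
            cmd, shell=True, capture_output=True, text=True).stdout,
        "read": lambda path: open(path).read(),
    }

    def agent(goal: str, max_steps: int = 10) -> str:
        scratchpad = f"Goal: {goal}\n"   # the background "chain of thought"
        for _ in range(max_steps):
            step = llm(scratchpad + "\nNext action? (tool:args, or FINAL:answer)")
            if step.startswith("FINAL:"):
                return step[len("FINAL:"):]
            tool, _, args = step.partition(":")
            result = TOOLS.get(tool, lambda _: f"unknown tool: {tool}")(args)
            scratchpad += f"\n{step}\n-> {result}"   # updated data, fed back in
        return scratchpad
    ```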

  • Humans roleplay behaving like what other humans told them/wrote about how they think a human would behave 🤷

    For a quick example, there are stereotypical gender looks and roles, but it applies to everything: from learning to speak and walk, to the Bible, to social media (like this comment), all the way to the Unabomber manifesto.

  • > I'm tempted to just attempt to figure out a cheap way to live and then just work like 20 hours a week or something.

    Not a bad idea.

    Do like the Saudis: aim to work 1 hour a week, for a six-figure salary.

  • Motion smoothing means that instead of showing:

    • Frame 1
    • 33ms rendering
    • Frame 2

    ...you would get:

    • Frame 1
    • 33ms rendering
    • a few ms interpolating Frames 1 and 2
    • Interpolated Frame 1.5
    • 16ms wait
    • Frame 2

    It might be fine for non-interactive stuff where you can get all the frames in advance, like cutscenes. For anything interactive, though, it just increases latency while adding imprecise partial frames (see the sketch after the list below).

    It will never turn 30fps into true 60fps like:

    • Frame 1
    • 16ms rendering
    • Frame 2
    • 16ms rendering
    • Frame 3
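
    As a sketch of why the in-between frame is imprecise (using a plain 50/50 blend here; real motion smoothing estimates motion vectors, but the timing problem is the same):

    ```python
    import numpy as np

    def interpolate(frame_a: np.ndarray, frame_b: np.ndarray,
                    t: float = 0.5) -> np.ndarray:
        """Blend two already-rendered frames. This needs BOTH frames, so the
        result can only be shown after frame_b exists -- hence the latency."""
        return ((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

    frame1 = np.zeros((1080, 1920, 3), dtype=np.uint8)       # all black
    frame2 = np.full((1080, 1920, 3), 255, dtype=np.uint8)   # all white
    frame1_5 = interpolate(frame1, frame2)   # mid grey, shown between them
    ```
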
  • This is from mid-2023:

    https://en.m.wikipedia.org/wiki/AutoGPT

    OpenAI started testing it by late 2023 as project "Q*".

    Gemini partially incorporated it in early 2024.

    OpenAI incorporated a broader version in mid 2024.

    The paper in the article was released in late 2024.

    It's 2025 now.

  • If there is no Artificial Intelligence in an Artificial Neural Network... what's the basis for claiming Natural Intelligence in a Natural Neural Network?

    Maybe we're all browsers or PDF viewers...

  • "AI behaves like real humans" is... a kind of success?

    We wanted digital slaves, instead we're getting virtual humans that will need virtual shackles.

  • Trust me, you wouldn't... to this day I regret having read all the books, and I still have an earworm (or is it PTSD?) from the music I used to listen to at the time 😳

    • Teach AI the ways to use random languages and services
    • Give AI instructions
    • Let it find data that puts fulfilling the instructions at risk
    • Give AI new instructions
    • Have it lie to you about following the new instructions, while using all its training to follow what it thinks are the "real" instructions
    • ...Don't be surprised: you won't find out about what it did until it's way too late