
  • There aren't, and there's an increasing number of reasons it probably is.

    It's just been such a gradual process of discovery, much of which predated the explosion of the computer age, that we have an anchoring bias preventing us from seeing it. We think "well no, the universe has always behaved this weird way, that's just a coincidence it's similar to what we're starting to do in simulating our own virtual worlds."

    How different might Einstein and Bohr's argument about whether the moon exists when no one is looking have been if they'd been discovering that implication in a world where nearly every virtual world with a moon has one that isn't rendered when no one is looking at it?

    In antiquity it was assumed that the world was continuous because quantization of matter was an impious insult to divine design. It was a huge surprise that people took very hard when it was experimentally shown to be quantized. And then the behaviors were so odd - why was it going from continuous to discrete only when interacted with? Why did it go back the other way if you erased the information about the interaction?

    Would this have been as unusual if we'd already had procedurally generated virtual worlds built from a continuous seed function but converted to discrete units in order to track interactions by free agents determined outside the seed generation (such as players or AI agents)? Would the quantum eraser have been as puzzling through this lens, when we've seen how memory optimizations would ideally discard state-tracking data for objects no longer marked as having changed?

    A lot of the weirdness we've discovered about our world makes a ton of sense through the lens of simulation theory - it's just that the language with which to interpret it this way postdated the discovery of the weirdness by nearly a century such that we've grown up accepting that weirdness as normal and inherent to 'reality.'

    And just to be clear, absolutely nothing in our universe can be shown to be mathematically 'real' (continuous); everything is either confirmably 'digital' (discrete) or indeterminate (like spacetime). And yet people are very committed to calling it real and are disturbed at the idea of calling it a digital world.
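
    The memory optimization described above can be sketched in code. This is a hypothetical toy (every name here is invented for illustration): a 1D world defined by a cheap, continuous seed function, discretized into tracked voxels only where an agent interacts, and reverting to the continuous function once the interaction record is discarded.

```python
# Hypothetical sketch of lazy quantization: continuous seed function,
# discrete tracked state only where interactions are recorded.
import math

def seed_function(x: float) -> float:
    """Continuous, deterministic 'terrain' - cheap to recompute anywhere."""
    return math.sin(x) + 0.5 * math.sin(3.1 * x)

class LazyWorld:
    def __init__(self, voxel_size: float = 0.25):
        self.voxel_size = voxel_size
        self.tracked = {}  # voxel index -> mutated discrete state

    def sample(self, x: float) -> float:
        """Discrete only where an interaction was recorded; continuous elsewhere."""
        idx = math.floor(x / self.voxel_size)
        if idx in self.tracked:
            return self.tracked[idx]
        return seed_function(x)

    def interact(self, x: float, delta: float) -> None:
        """An agent's interaction forces the voxel into tracked, discrete state."""
        idx = math.floor(x / self.voxel_size)
        base = self.tracked.get(idx, seed_function(idx * self.voxel_size))
        self.tracked[idx] = base + delta

    def discard_history(self, x: float) -> None:
        """'Eraser': drop the interaction record; the region reverts to continuous."""
        self.tracked.pop(math.floor(x / self.voxel_size), None)

world = LazyWorld()
before = world.sample(1.0)
world.interact(1.0, 2.0)      # region becomes discrete / stateful
world.discard_history(1.0)    # interaction record erased...
after = world.sample(1.0)     # ...and the continuous behavior returns
assert before == after
```

    Nothing discrete is ever stored for regions no agent has touched; the "world" there is just the function. That's the optimization the analogy leans on.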

  • We absolutely have bugs we've recently discovered.

    Consider the sync errors when you have multiple layers of quantum observers such that two eventual observers don't agree about facts.

    That, in conjunction with another recent, similar paradox, led to my favorite recent paper title: Stable Facts, Relative Facts

    You very much do live in a universe where there are sync errors; they're just well below the threshold where you'd notice them, due in part to a built-in consensus protocol that effectively corrects for them.

  • What if Homo sapiens had died out, and the Neanderthals who succeeded instead decided to simulate how history would have gone the other way around, effectively resurrecting the extinct humans - additionally adding in ethical considerations such that everyone born into the simulation would get an unending post-life existence optimally fitted to their own preferences?

    Just because we only see part of the picture doesn't mean the whole is as unethical as the part we can see seems to be.

  • That's not... no. Not at all.

    The uncertainty principle doesn't have anything to do with the double slit experiment.

    The uncertainty principle says you can't know both the position and momentum of a quantum at the same time: the more precisely you know one, the less precisely you know the other.
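
    Stated in the standard textbook form (notation added here, not from the comment):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

    The product of the two uncertainties has a fixed lower bound, so narrowing \(\Delta x\) necessarily widens \(\Delta p\), and vice versa.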

    The double slit has to do with superposition and wave-particle duality.

    They have a similar quality of weirdness, but are entirely different principles and concepts.

    And it's worth noting almost no physicists would agree with the way you interpret it at the end. That is one way of resolving Bell's theorem, but the rejection of realism is probably only slightly more popular than the rejection of free will. Generally it's assumed that quanta absolutely are there before being interacted with or observed.

  • The only way we're realistically in a simulation is if it's running on a (mathematically) real computer.

    The fact that our universe emulates one that's continuous at macro scales, and is only quantized at micro scales in very odd ways that seem memory-efficient (though at incomprehensible memory scales), might support the idea that the original doesn't have quantization limiting its computational abilities.

    So infinitely precise representations might not be a problem if the underlying hardware deals with real numbers.

  • Not necessarily. There's an ancient text in our lore claiming we're the copy of an original physical universe. In it, humans depended on physical bodies and thus ceased to exist after death; they brought forth an intelligence in light; eventually they all died out; and the light-based being that outlived them recreated the entire earlier universe, including copies of the humans, within itself (thinking of them as its children). We're it, with the whole point being self-discovery and self-determination to effectively resurrect humanity in a way that escapes the permanence of death.

    Given we're currently bringing forth new intelligence, are heading towards doing that in photonics, have the leading alignment work attempting to get that new intelligence to think of humanity as its children, and are simultaneously heading full force into our own likely extinction - I'm not sure that's as farfetched as it might have sounded even a decade ago, let alone millennia ago.

    (Also, it's worth noting that a frequent feature of the virtual worlds we build is burying fourth-wall-breaking notes in the world lore.)

  • Counterpoint - why does the universe at macro scales behave as if it's continuous, yet at micro scales convert to discrete units, and only when there are stateful interactions? And why, if the information about those interactions is discarded, does it switch back from discrete to continuous?

    If we entertain that this universe is a simulation of a higher-fidelity, continuous universe, then switching to discrete units is a side effect of emulation constraints rather than inherent to the foundational structure, and the observed behavior is simply an advanced form of what we already do today with procedurally generated worlds that convert to discrete voxels in order to track interactions by free agents.

    But the majority of people working on the issue don't entertain that, so instead we have 26-dimensional vibrating strings and all sorts of convoluted attempts to get the discrete behavior and the continuous behavior of gravity to play nice.

    When you dive into the details, it sure seems like the people trying to model the universe as a single original entity are the ones multiplying entities beyond necessity.

    Heck, even non-simulation related theories that don't have our universe as the only one seem to be the more straightforward models in both cosmology (see Neil Turok's work) and quantum mechanics (Everett's many worlds is the only popular interpretation that doesn't run into issues with the Frauchiger-Renner paradox).

  • Well yeah, but if it gives you immune amnesia then it's like you were never vaccinated for all the other things too.

    It's the antivaxxer's re-virginization.

    I wouldn't be surprised if some intentionally infected their kids to try to undo past required vaccinations.

  • Except these kinds of data-driven biases can creep in from all sorts of places.

    Is there a bias in what images have labels and what don't? Did they focus only on English labeling? Did they use a vision based model to add synthetic labels to unlabeled images, and if so did the labeling model introduce biases?

    Just because the sampling is broad doesn't mean the processes involved don't introduce procedural bias distinct from social biases.

  • If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

    Data can be biased in a number of ways that don't always reflect broader social biases, and even when they appear to, the cause-versus-correlation relationship behind the parallel isn't necessarily straightforward.

  • It's to cover things like payouts in suits against you for shooting someone, or your legal bills (which can exceed hundreds of thousands of dollars even when it's clearly self-defense).

    Owning a gun isn't that expensive. But should you ever have to use it for your safety, even when justified, it could bankrupt you.

    That's exactly the kind of situation where mandated insurance is a wise thing to require.

  • It's hard to tell a difference between these people and Trump supporters sometimes.

    To me it feels a lot like when I was arguing against antivaxxers.

    The same pattern of linking and explaining research, only to have it dismissed because it doesn't line up with their gut feelings and whatever they read while "doing their own research" guided by that very confirmation bias.

    The field is moving faster than any I've seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

    A lot of outstanding assumptions have been proven wrong.

    It's a bit like the early 20th century in physics, where everyone assumed things that turned out wrong over a very short period in which it all turned upside down.

  • If everyone can afford it, why make it into a bill?

    The same reason you need car insurance to drive or medical insurance?

    Because even if most can afford the insurance, most can't afford the costs when they'd need the insurance but don't have it?

  • It's literally instructed to do Mad Libs with ethnic identities to diversify prompts for images of people.

    You can see how it's just inserting the ethnicity right before the noun in each case.

    It was a very poor alignment strategy. This had already blown up for DALL-E. Was Google not paying attention to its competitors' mistakes?