Posts: 2 · Comments: 260 · Joined: 1 yr. ago

  • If I move something close enough to my face it appears in view twice seemingly semi-transparent

    That sounds like what I experience, and not just for things very close to my face: it happens whenever my eyes are converged on something in front of or behind the object.

    But in order to do the dominant-eye test, you need to see only one image in the foreground and background simultaneously. So how does that happen unless the view from one eye is at least partially suppressed?

    This is one of those things that's really hard to talk about and describe, but I would love to actually understand it. Also no, I can't notice my blind spots.

  • I've tried it, but there isn't any traffic data, so it's really not usable for me. The search isn't great either, and there's no lane indicator.

    Don't get me wrong, I think OSM and Organic are great projects, they just don't really compare to other options for my use.

  • Honestly I might consider giving this a try. I really don't want Google to have my location at all times, and I trust Apple at least a bit more with my data. Currently using Magic Earth, but the traffic info and search, while usable, are not great.

  • Not at all, I perceive depth fine.

    If I focus back on my hand, the two images align, and I see both images of the background. It's just that I'm always seeing information from both eyes.

    If anything, from my perspective it's everyone else who I'd expect to have difficulties with depth perception. You're only consciously perceiving one eye's image (in the binocular overlap region), and the other eye is just used for depth information by your subconscious. Is that correct?

  • That's interesting. For most people, the brain suppresses vision during the movement (saccadic masking) and substitutes in the image from wherever the eye lands, so it feels instantaneous; there's no noticeable blindness. But you can see throughout the full movement?

    In a similar vein, I never really understood the concept of a "dominant eye". I guess most people's brains suppress the information from one eye?

  • Many countries, including the US, use 12-hour time for everything, so it's easier for a lot of people not to have to constantly convert. That makes it a sensible default in those countries. And yes, I think 24-hour time should be the standard everywhere, but it's not. I also think it's insane not to use SI units, but oh well. (I think we should use decimal time as well, but that's never going to happen, because we'd need to redefine so many units; there's a sketch of what it would look like below.)
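
    For the curious, here's a minimal sketch of what decimal time conversion would look like, assuming the French Revolutionary scheme of 10 hours per day, 100 minutes per hour, and 100 seconds per minute (the function name is just mine):

    ```python
    # A day has 86400 standard seconds and 100000 decimal seconds,
    # so one decimal second = 0.864 standard seconds.
    def to_decimal_time(hours: int, minutes: int, seconds: int) -> tuple[int, int, int]:
        standard_seconds = hours * 3600 + minutes * 60 + seconds
        fraction_of_day = standard_seconds / 86_400
        decimal_seconds = round(fraction_of_day * 100_000)
        dh, rem = divmod(decimal_seconds, 10_000)  # 10000 decimal seconds per decimal hour
        dm, ds = divmod(rem, 100)
        return dh, dm, ds

    print(to_decimal_time(18, 0, 0))  # 18:00 -> (7, 50, 0), i.e. "7:50:00 decimal"
    ```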

  • Well, it falls apart pretty easily. LLMs are notoriously bad at math. And even if it were consistently accurate, it's not exactly efficient when a calculator from the 80s can do the same thing.

    We have setups where LLMs can call external functions (there's a sketch at the end of this comment), but I think it would be cool and useful to be able to replace certain internal processes.

    As a side note though, while I don't think that it's a "true" thought process, I do think there's a lot of similarity between LLMs and the human subconscious. A lot of LLM behaviour reminds me of split-brain patients.

    And as for the math aspect, the model does seem to do math very similarly to us. Studies show that we think of small numbers as discrete quantities but of big numbers in terms of relative size, which seems to be exactly what this model is doing.

    I just don't think it's a particularly good way of doing mental math. Natural intuition in humans and gradient descent in LLMs both seem to create layered heuristics that can become pretty much arbitrarily complex, but it still makes more sense to follow an exact algorithm for some things.
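
    To make the external-function idea concrete, here's a minimal sketch of the pattern: the model emits a structured tool call instead of guessing at arithmetic, and the host program does the exact calculation. The JSON shape and names here are hypothetical, not any particular vendor's API.

    ```python
    import json

    def calculator(expression: str) -> str:
        """Exact arithmetic -- the part LLMs are unreliable at."""
        # Restrict eval to digits, operators, and parentheses (sketch-level safety).
        if not set(expression) <= set("0123456789+-*/(). "):
            raise ValueError("unsupported expression")
        return str(eval(expression, {"__builtins__": {}}, {}))

    TOOLS = {"calculator": calculator}

    def handle_model_output(raw: str) -> str:
        """Run a tool if the model requested one; otherwise pass the text through."""
        try:
            msg = json.loads(raw)
        except json.JSONDecodeError:
            return raw  # plain-text answer, no tool call
        if not isinstance(msg, dict) or "tool" not in msg:
            return raw
        return TOOLS[msg["tool"]](**msg["arguments"])

    # Imagined model output for "What is 137 * 245?":
    raw = '{"tool": "calculator", "arguments": {"expression": "137 * 245"}}'
    print(handle_model_output(raw))  # -> 33565, computed exactly, not predicted
    ```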

  • I considered this, and I think it depends mostly on who owns the means of production.

    Even in the scenario where everyone has access to superhuman models, labor would still be devalued. Combined with robotics and other forms of automation, the capitalist class would no longer need workers, and large parts of the economy would disappear. That would create a two-tiered society: those with resources become incredibly wealthy and powerful, while those without have no ability to do much of anything and would likely revert to an agricultural society (assuming access to land) or be propped up with something like UBI.

    Basically, I don't see how it would lead to any form of communism on its own. It would still require a revolution. That being said, I do think AGI could absolutely be a pillar of a post capitalist utopia, I just don't think it will do much to get us there.

  • Depends on what we mean by "AI".

    Machine learning? It's already had a huge effect; drug discovery alone is transformative.

    LLMs and the like? Yeah, I'm not sure how positive these are. I don't think they've actually been all that impactful so far.

    Once we have true machine intelligence, then we have the potential for great improvements in daily life and society, but that entirely depends on how it will be used.

    It could be a bridge to post-scarcity, but under capitalism it's much more likely it will erode the working class further and exacerbate inequality.

  • Well, I remember seeing a study showing that ads actually have a pretty bad return on investment on average. The problem is that selling ads is quite profitable: platforms drown us in ads because it makes them money, and it doesn't matter whether the ads themselves are effective.