  • That take only works if you ignore how visual perception actually works. White and gold viewers aren’t wrong—they’re seeing the same pixel values as everyone else, but their brains interpret the lighting differently. The photo has no clear cues about illumination, so the brain fills in the blanks. Some people assume shadow or cool lighting and perceive the colors as lighter, others assume warm light and see them as darker. Both are valid perceptual outcomes given the ambiguity. But here’s the kicker: the actual pixel values in the image are pale blue and a brownish gold. So in terms of what’s literally in the image, white and gold viewers are actually closer to the raw data, regardless of what color the physical dress is in real life. The idea that black and blue people are just “right” misses that distinction completely. What’s especially funny is how often that group doubles down like they’ve uncovered some grand truth, when in reality, they’re just less able—or less willing—to grasp that perception isn’t about facts, it’s about interpretation. It’s like watching someone shout that a painting is wrong because it’s not a photograph.

  • The claim mixes up how perception works and what people actually mean when they talk about top-down processing. White and gold viewers aren’t saying the pixels are literally white and gold—they’re saying the colors they perceive match most closely with that label, especially when those were the only options given. Many of them describe seeing pale blue and brown, which are the actual pixel values. That’s not bottom-up processing in the strict sense, because even that perception is shaped by how the brain interprets the image based on assumed lighting. You don’t just see wavelengths—you see surfaces under conditions your brain is constantly estimating. The dress image is ambiguous, so different people lock into different lighting models early in the process, and that influences what the colors look like. The snake example doesn’t hold up either. If the lighting changes and your perception doesn’t adjust, that’s when you’re more likely to get the snake’s color wrong. Contextual correction helps you survive, it doesn’t kill you. As for the brain scan data, higher activity in certain areas means more cognitive involvement, not necessarily error. There’s no evidence those areas were just shutting things down. The image is unstable, people resolve it differently, and that difference shows up in brain activity.

  • What science lol.

    The pixels are light blue and gold.

    The dress itself is dark blue and black.

    But the pixels side with the white and gold team: they are seeing the pixels as they appear. If you see blue and black, your subconscious is overriding the objective reality of the pixels (and correctly guessing the actual colours of the original dress).

  • fMRI studies show that white-and-gold perceivers exhibit more activity in frontal and parietal brain regions, suggesting that their interpretation involves more top-down processing. This means they are more, not less, engaged in contextual interpretation.

    Some differences may relate to physiological traits like macular pigment density, which affects how much blue light is absorbed before reaching the retina. People with higher density tend to see white and gold.

    Color perception is not only about the visual cortex’s function but about the image’s properties and the brain’s inferential processes. You’d know this if you weren’t a dumb blue-n-black’er.

  • Yes, a very light blue; nobody is seeing brilliant white. But on a colour slider it’s much closer to white than the ‘true’ dark blue of the dress. If you sample the sleeve, or whatever that is hanging over it, it’ll be even closer to pure white.
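
    To put a rough number on “closer to white”, here’s a minimal sketch comparing a sampled colour’s RGB distance to pure white versus to a dark navy. The specific RGB triples are illustrative assumptions, not values taken from the actual photo; swap in whatever you sample yourself.

    ```python
    # Illustrative only: these RGB triples are assumptions, not sampled from the photo.
    light_region = (150, 160, 200)   # roughly the pale blue people read as "white"
    pure_white = (255, 255, 255)
    dress_navy = (20, 30, 70)        # roughly the dark blue of the real dress

    def rgb_distance(a, b):
        """Euclidean distance between two RGB triples."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    print(rgb_distance(light_region, pure_white))  # ~152: distance to white
    print(rgb_distance(light_region, dress_navy))  # ~225: distance to the real dress colour
    ```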

  • It’s subconscious; it’s not something you can learn. If that were the case, people would have no issue understanding how others weren’t ‘decrypting’ the photo.

    Also, the majority see it as blue and black; 30% see it as white and gold.

    The Journal of Vision, a scientific journal about vision research, announced in March 2015 that a special issue about the dress would be published with the title A Dress Rehearsal for Vision Science.

    The first large-scale scientific study on the dress was published in Current Biology three months after the image went viral. The study, which involved 1,400 respondents, found that 57 per cent saw the dress as blue and black, 30 per cent saw it as white and gold, 11 per cent saw it as blue and brown, and two per cent reported it as "other". Women and older people disproportionately saw the dress as white and gold. The researchers further found that, if the dress was shown in artificial yellow-coloured lighting, almost all respondents saw the dress as black and blue, while they saw it as white and gold if the simulated lighting had a blue bias.

    Another study in the Journal of Vision, by Pascal Wallisch, found that people who were early risers were more likely to think the dress was lit by natural light, perceiving it as white and gold, and that "night owls" saw the dress as blue and black.

    A study carried out by Schlaffke et al. reported that individuals who saw the dress as white and gold showed increased activity in the frontal and parietal regions of the brain. These areas are thought to be critical in higher cognition activities such as top-down modulation in visual perception.

  • That we’re curious problem solvers?

    Anyway, science has determined that my way is most based

    A study carried out by Schlaffke et al. reported that individuals who saw the dress as white and gold showed increased activity in the frontal and parietal regions of the brain. These areas are thought to be critical in higher cognition activities such as top-down modulation in visual perception.

  • You can sample the colours and see it’s white with a very light blue tinge and gold.

    People who see it as blue and black are (correctly in this case) auto-correcting for the yellow light as the dress itself is black and blue.

    Whereas people who see it as white and gold are (subconsciously) assuming a blue shadow and seeing the pixels as they’re displayed.
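
    If you’d rather check than take either side’s word for it, here’s a minimal sketch of the sampling step. The filename and the (x, y) coordinate are placeholders; point them at a saved copy of the image and a spot on the lighter part of the dress.

    ```python
    # Minimal sketch: sample one pixel from a saved copy of the image.
    # "dress.jpg" and (x, y) are placeholders, not references to a specific file.
    from PIL import Image

    img = Image.open("dress.jpg").convert("RGB")
    r, g, b = img.getpixel((100, 200))   # pick a point on the lighter fabric
    print(r, g, b)                       # expect a desaturated, bluish value rather than a saturated navy
    ```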

  • Yes, LLM inference consists of deterministic matrix multiplications applied to the current context. But that simplicity in operations does not make it equivalent to a Markov chain. The definition of a Markov process requires that the next output depends only on the current state. You’re assuming that the LLM’s “state” is its current context window. But in an LLM, this “state” is not discrete. It is a structured, deeply encoded set of vectors shaped by non-linear transformations across layers. The state is not just the visible tokens—it is the full set of learned representations computed from them.

    A Markov chain transitions between discrete, enumerable states with fixed transition probabilities. LLMs instead apply a learned function over a high-dimensional, continuous input space, producing outputs by computing context-sensitive interactions. These interactions allow generalization and compositionality, not just selection among known paths.

    The fact that inference uses fixed weights does not mean it reduces to a transition table. The output is computed by composing multiple learned projections, attention mechanisms, and feedforward layers that operate in ways no Markov chain ever has. You can’t describe an attention head with a transition matrix. You can’t reduce positional encoding or attention-weighted context mixing into state transitions. These are structured transformations, not symbolic transitions.

    You can describe any deterministic process as a function, but not all deterministic functions are Markovian. What makes a process Markov is not just forgetting prior history. It is having a fixed, memoryless probabilistic structure where transitions depend only on a defined discrete state. LLMs don’t transition between states in this sense. They recompute probability distributions from scratch each step, based on context-rich, continuous-valued encodings. That is not a Markov process. It’s a stateless function approximator conditioned on a window, built to generalize across unseen input patterns.
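
    To make that contrast concrete, here’s a rough sketch with toy sizes (not any particular model’s code): a Markov chain’s next-step distribution is literally a row lookup in a fixed table over enumerable states, while a single scaled dot-product attention head recomputes a continuous, context-dependent mixture from the whole window at every step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # --- Toy Markov chain: discrete, enumerable states and a fixed transition table ---
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.5, 0.3],
                  [0.0, 0.4, 0.6]])
    current_state = 1
    next_dist = P[current_state]       # next-step distribution is literally a row lookup

    # --- One scaled dot-product attention head over continuous embeddings ---
    d = 8                              # embedding dimension (toy size)
    X = rng.normal(size=(5, d))        # 5 token embeddings in the context window
    Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]

    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)      # every position interacts with every other position
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ V                  # a context-dependent mixture, recomputed each step

    print(next_dist)                   # one of three enumerable rows
    print(out.shape)                   # (5, 8): continuous vectors, not entries in a table
    ```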

  • You can say that the whole system is deterministic and finite, so you could record every input-output pair. But you could do that for any program. That doesn't make every deterministic function a Markov process. It just means it is representable in a finite way. The question is not whether the function can be stored. The question is whether its behavior matches the structure and assumptions of a Markov model. In the case of LLMs, it does not.

    Inference does not become a Markov chain simply because it returns a distribution based on current input. It becomes a sequence of deep functional computations where attention mechanisms simulate hierarchical, relational, and positional understanding of language. That does not align with the definition or behavior of a Markov model, even if both map a state to a probability distribution. The structure of the computation, not just the input-output determinism, is what matters.
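
    As a tiny illustration of the “you could record every input-output pair” point: memoising any deterministic function gives you a finite table, but the table is just a record of its behaviour, not a model of how it computes. The function below is an arbitrary stand-in, nothing LLM-specific.

    ```python
    # Sketch of "representable in a finite way" vs "being a transition table".
    # f stands in for any deterministic pipeline; the dict is just a record of outputs.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(x: int) -> int:
        # arbitrary deterministic computation (placeholder for a forward pass, a compiler, ...)
        return (x * 2654435761) % 1024

    table = {x: f(x) for x in range(16)}  # finite record of input-output pairs
    print(table[3] == f(3))               # True: the table reproduces f on recorded inputs,
                                          # but says nothing about the structure of the computation
    ```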