
  • Thanks!

    We had to learn altruism to survive the neglect, so our parents accidentally trained us not to pull up the ladder like the Boomers. Lord of the Flies debunked.

    To be honest, the neglect was better than when they did pay attention, for all we complain about it. No parenting was better than shitty parenting.

  • My pleasure, I’m always down for nerding out on stuff like this.

    Another fun example of early visual processing is feature detection. Parallel processing lets us instantly find a green square in a sea of red squares; the odd one out just pops out at you.

    But when you combine multiple independent features (find the green square in a sea of red and green squares and circles), now we have to tediously scan the whole image. Integrating multiple features forces the work higher up in the visual system and takes more time, attention, and effort. That’s why Where’s Waldo is hard (see the toy sketch after the image below).

    https://i.imgur.com/2UZhT3I.jpeg
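
    To make the contrast concrete, here’s a toy Python sketch (my illustration, not a real vision model): the parallel feature map behaves like a precomputed index, while conjunction search has to inspect items one at a time.

    ```python
    import random

    # Toy model of visual search; the real visual system is far messier.
    # Items are (color, shape) pairs and the target is a green square.
    TARGET = ("green", "square")

    def feature_display(n):
        # Pop-out case: every distractor differs from the target in color.
        return [("red", random.choice(["square", "circle"])) for _ in range(n)] + [TARGET]

    def conjunction_display(n):
        # Conjunction case: each distractor shares one feature with the target.
        return [random.choice([("red", "square"), ("green", "circle")]) for _ in range(n)] + [TARGET]

    def feature_search(display):
        # Early vision computes a color map in parallel, so "the green one"
        # is effectively a single lookup no matter how big the display is.
        color_map = {color: i for i, (color, _) in enumerate(display)}
        return color_map["green"], 1  # target position, one attentional step

    def conjunction_search(display):
        # Binding color AND shape takes serial attention: inspect one by one.
        random.shuffle(display)
        for steps, item in enumerate(display, start=1):
            if item == TARGET:
                return steps

    for n in (10, 100):
        print(f"display of {n + 1}: feature search = {feature_search(feature_display(n))[1]} step, "
              f"conjunction search = {conjunction_search(conjunction_display(n))} steps")
    ```

    Feature search stays at one step while the conjunction count grows with display size, which is the Where’s Waldo effect.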

  • Psychophysics is the study of the relationship between stimuli and perception. A lot of it is perceptual thresholds: how much sound or light is needed for someone to hear or see something (fun fact: people can detect a single photon in a dark room at better-than-chance rates). Much of the experimental protocol thus boils down to asking “can you hear this?” after playing a sound, which is what the Verizon ads always made me think of.

    The other half of the discipline is figuring out the wiring of low-level perception. For instance, we have massive parallel processing in the early visual system that does things like highlight lines and edges, which is what makes the Mach band illusion work (sketch at the end of this comment):

    https://en.wikipedia.org/wiki/Mach_bands

    It’s what I studied in undergrad, as it was a nice overlap of my interests in cognitive science, psychology, and computer science. I basically just like knowing how everything works :)

    More info:

    https://en.wikipedia.org/wiki/Psychophysics
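
    Here’s a minimal numpy sketch of that lateral-inhibition story behind Mach bands (my toy model, with made-up weights): each unit is excited by its own input and inhibited by its neighbors, which produces an undershoot and an overshoot at the corners of a luminance ramp.

    ```python
    import numpy as np

    # Luminance profile: dark plateau, linear ramp up, bright plateau.
    luminance = np.concatenate([
        np.full(20, 10.0),            # dark plateau
        np.linspace(10.0, 50.0, 20),  # ramp
        np.full(20, 50.0),            # bright plateau
    ])

    # Crude center-surround receptive field: strong excitatory center,
    # inhibitory surround. Weights sum to 1, so flat regions pass through.
    kernel = np.array([-0.25, -0.25, 2.0, -0.25, -0.25])
    response = np.convolve(luminance, kernel, mode="same")[2:-2]  # drop edge artifacts

    print("dark plateau response:", response[5])     # ~10, unchanged
    print("dip at ramp start    :", response.min())  # < 10 -> dark Mach band
    print("peak at ramp end     :", response.max())  # > 50 -> bright Mach band
    ```

    The physical luminance never goes below 10 or above 50, but the neural response does, and that over/undershoot is what we perceive as the bands.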

  • Slashdot had an interesting variation on voting. People would be randomly assigned mod points to give to posts they liked. It didn’t happen every day, but when it did you spent those points and then it was other people’s turn. Then they had a meta-moderation system on top of that so that misused points or trolling could get corrected. I’ve not seen its like since.
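
    Roughly how the pieces fit together, as a sketch (the point counts and odds here are my guesses, not Slashdot’s actual numbers):

    ```python
    import random

    def grant_mod_points(users, fraction=0.1, points=5):
        """Every so often, a random slice of users gets a few mod points."""
        lucky = random.sample(users, max(1, int(len(users) * fraction)))
        return {user: points for user in lucky}

    def moderate(mod_points, moderator, post, delta, log):
        """Spend one point to nudge a post's score; every action is logged."""
        if mod_points.get(moderator, 0) > 0:
            mod_points[moderator] -= 1
            post["score"] += delta
            log.append({"moderator": moderator, "post": post["id"], "delta": delta})

    def meta_moderate(log, judge):
        """Other users later review each moderation as fair or unfair;
        moderators judged unfair become less likely to get points again."""
        return {entry["moderator"]: judge(entry) for entry in log}
    ```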

  • I knew a very smart manager who quit smoking but still used to hang out in the smoking area just to stay in touch with everything. I’ve learned more in 10-minute conversations while smoking with coworkers than in entire week-long seminars.

  • Good example and well explained. We should team up on a book on science for lay people!

    Your point about specifying the null hypothesis and the p value is very important. Another way studies can fail is if you pick 20 different variables, like you mentioned, and then look to see if any of them give you p < 0.05. So in your example, we measure smiling and 19 other factors besides being told jokes: say the weather, the day of the week, what color clothes the person is wearing, what they had for breakfast, etc. With a 1-in-20 false positive rate per test, on average one of those 20 will appear relevant by chance. You’re essentially doing 20 experiments in one, so again you’ll likely get one spurious result that you can report as “success” (see the simulation at the end of this comment).

    Experimental design is tough, and it’s hard to grok until you’ve had to design and run your own experiment, math included. That makes it easy for people to pass off bad science as legitimate, whether accidentally or on purpose. And it’s why peer review is important: your study gets sent to other researchers in your field for critique before publication.

    There are other things besides bad math that can trip you up, like correlation vs. causation and how the data is gathered. In the above example, you might try to save money by asking subjects to self-report their smiling. But people are bad at that, due to fallible memory and bias (did that really count as a full smile?). Ideally you want to follow them around and count yourself, with a clear definition of what counts as a smile, or make them wear a camera that does facial recognition. But both of those cost more money than handing someone a piece of paper and a pencil and hoping for the best. That’s why you should always be extra suspicious of studies that use self-reporting. As my social psych prof said, surveys are the worst form of data collection. It’s what makes polling hard: what people say and what they do are often entirely different things.
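
    Here’s a quick simulation of that 20-variable trap (my sketch; the variables are random noise by construction, so any hit is guaranteed to be spurious):

    ```python
    import numpy as np
    from scipy import stats

    # 20 variables with NO real relationship to smiling; test each anyway.
    rng = np.random.default_rng(0)
    n_subjects, n_variables = 100, 20

    smiling = rng.normal(size=n_subjects)
    junk = rng.normal(size=(n_variables, n_subjects))  # weather, clothes, breakfast...

    p_values = [stats.pearsonr(var, smiling)[1] for var in junk]
    hits = sum(p < 0.05 for p in p_values)
    print(f"{hits} of {n_variables} unrelated variables came out 'significant'")
    ```

    On average about one of the 20 crosses the line; rerun with a different seed and the “discovery” moves to a different variable.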

  • Reminds me of reading the print version of Infinite Jest by David Foster Wallace, where you needed one bookmark for the novel and another for the endnotes, which made up like 20% of the book. Hopefully e-readers make that easier now.

  • I like that and will start using it. We’re all pretty helpless after birth and before death, so being able-bodied is just a temporary phase in the middle, for those lucky enough to avoid being born with a disability or acquiring one along the way.

  • Being poor effectively shaves about 13 points off your IQ, due to the stress and extra cognitive load of making these tough decisions for every little thing. Those 13 points come back should you be lucky enough to improve your station in life. Meanwhile, the loss of brainpower increases the likelihood of bad decisions that make your life worse, and the cycle continues.

    https://www.reuters.com/article/idUSBRE97S10Y/

  • p < 0.05 means that if there were no real effect, you’d see a result at least this strong less than 1 time in 20 just by chance. It’s the most common threshold for a result to be considered significant, but you’ll also see p < 0.01 or smaller if the data clears a stricter bar, like 1 in 100. The smaller the p value the better, but detecting the same effect at a stricter threshold means larger data sets, which costs more of your experiment budget to recruit subjects, buy equipment, and pay salaries. Gotta make those grant budgets stretch, so researchers go with 1 in 20 since it’s the accepted standard.
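
    To see why a stricter threshold costs more subjects, here’s a rough power simulation (my sketch; the 0.5-standard-deviation effect size and 80% power target are conventional made-up inputs, not from any particular study):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def power(n, alpha, effect=0.5, trials=2000):
        """Fraction of simulated experiments that reach p < alpha."""
        hits = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)     # placebo group
            treated = rng.normal(effect, 1.0, n)  # real 0.5 SD effect
            _, p = stats.ttest_ind(control, treated)
            hits += p < alpha
        return hits / trials

    for alpha in (0.05, 0.01):
        n = 10
        while power(n, alpha) < 0.80:  # grow the sample until 80% power
            n += 5
        print(f"alpha = {alpha}: about {n} subjects per group")
    ```

    Dropping the threshold from 1 in 20 to 1 in 100 pushes the required group size up by roughly half again, and every extra subject costs money.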

  • p < 0.05 means that even when there’s no real effect, about one in 20 studies will look significant just by chance. If you have 20 researchers studying the same (nonexistent) effect, the 19 who get non-significant results go in the file drawer and the one who gets a “result” sees the light of day.

    That’s why publishing negative results is important, but it’s rarely done because nobody gets credit for a failed experiment. It’s also why it’s important to wait for replication. One swallow does not make a summer, no matter how much breathless science reporting happens whenever someone announces a positive result from a novel study.
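
    A miniature version of that file drawer (my sketch; both groups are drawn from the same distribution, so any “effect” is pure chance):

    ```python
    import numpy as np
    from scipy import stats

    # 20 labs all study an effect that does not exist. Only 'significant'
    # results get published; expect about one per batch of 20, varying by run.
    published = []
    for lab in range(20):
        control = np.random.normal(0.0, 1.0, 30)  # same distribution...
        treated = np.random.normal(0.0, 1.0, 30)  # ...so no real effect
        _, p = stats.ttest_ind(control, treated)
        if p < 0.05:
            published.append(round(treated.mean() - control.mean(), 2))

    print(f"published {len(published)} of 20 null studies")
    print("published 'effect sizes':", published)
    ```

    The published results are exactly the flukes, and their effect sizes are inflated by construction, which is why a single unreplicated positive result deserves skepticism.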

    TL;DR - math is hard