
  • Meanwhile psychologists just name things as blandly as they possibly can. There's a neat phenomenon where a relationship can immediately be viewed as deeper and more connected, merely by one of the individuals sharing deeply personal information. It even works at the very first interaction. In other words, if someone tends to overshare, or blurt out info about themselves, we measure their blirtatiousness and its effect on relationships. Not even kidding. I think the folks who came up with it were Scottish, which is why it's blirt rather than blurt.

  • What are you talking about? I constantly explain the calculus of the flow rate in the push IV drug I'm giving by going through the πr^2 * h of the syringe, with emphasis on the dh/dt. All my patients love hearing it. They constantly thank me as I finish giving them the dilaudid.

  • Well, if they were stinky I'd probably be upset. If their hands were sticky, I'd be upset. Repeat for the other social offenses. Otherwise, sure, go for it. We all need a case of mistaken identity in our lives.

  • rule

  • There was an off-brand selling something called maple cremes. The cookies were in the shape of maple leaves, and the frosting in the center was just a touch off from brown sugar goo. They were good.

  • Fine then: "Linux is not Unix, Xerxes!"

    Imagine a very irate Spartan shouting it as he hurls his spear across the room where the lawyers are having their discussion about the lawsuit pending between the Linux-loving Spartans and the tyrannical Unix-using Persians.

  • I think most science books are understandable by laypersons, except those that are memorization-heavy, like biochemistry, or organic chemistry, or some parts of things like microbiology and pathophysiology. Statistics and research design books were pretty understandable, except for the actual math, heh. There really needs to be a push for people to read them casually, and to be encouraged to just stick to the concept parts and ignore the math and the memorization of minor stuff. The free textbooks out there (I think OpenStax is pretty good, personally) are getting to the point where I think people might read them just for the 'ooh' part of science. Heck, it's why psychology is such an enticing subject in the first place; it's basically the degree of human interest facts.

    I just thought that understanding the way the null hypothesis is used is important to really grasp what information the p-value is really conveying.

    :D And for the parts about self-reporting bias, and definitions and such, I was really, really having to hold myself back from talking about what makes your variables independent or dependent, operational definitions, ANOVA and MANOVA and t-tables and Cohen's d and the shift in emphasis from p-values to error bars and all the other lovely goodies. The stuff really brings me back, eh? ;)

  • To expand on the other fella's explanation:

    In psychology especially, and some other fields, the 'null hypothesis' is used. That means that the researcher 'assumes' that there is no effect or difference in what he is measuring. If you know that the average person smiles 20 times a day, and you want to check if someone (person A) making jokes around a person (person B) all day makes person B smile more than average, you assume that there will be no change. In other words, the expected outcome is that person B will still smile 20 times a day.

    The experiment is performed and data collected; in this example, how many times person B smiled during the day. Do that for a lot of people, and you have your data set. Let's say that the average number of smiles per day turned out to be 25 during the experimental procedure. Using some fancy statistics (not really fancy, but it sure can seem like it), you calculate the probability that you would get an average of 25 smiles a day if the assumption were true that making jokes around a person does not change the 20-per-day average. The more people you experimented on, and the larger the deviation from the assumed average, the lower that probability. If the probability is less than 5%, you say that p<0.05, and for a research experiment like the one described above, that's probably good enough for your field to pat you on the back and tell you that the 'null hypothesis' of there being no effect from your independent variable (the making-jokes thing) is wrong, and you can confidently say that making jokes will cause people to smile more, on average.

    If you are being more rigorous, or testing multiple independent variables at once, as you might when examining different therapies or drugs, you start making your X smaller in the p<X statement. Good studies will predetermine what X they will use, so as to avoid the mistake of settling on whatever 'good enough' number happens to fit their data.
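The procedure described above can be sketched in a few lines of Python. The daily smile counts below are invented for illustration (the comment only specifies the hypothetical averages of 20 and 25), and a normal approximation stands in for the Student's t-distribution a real analysis of a sample this small would use:

```python
import math
from statistics import NormalDist, mean, stdev

# The known baseline: the average person smiles 20 times a day.
# The null hypothesis says jokes do not change this.
POPULATION_MEAN = 20

# Hypothetical daily smile counts for people exposed to jokes
# (made up so the sample averages 25, matching the example).
smiles = [25, 23, 27, 24, 26, 22, 28, 25, 24, 26]

n = len(smiles)
sample_mean = mean(smiles)
se = stdev(smiles) / math.sqrt(n)  # standard error of the sample mean

# z: how many standard errors the observed mean sits from 20.
z = (sample_mean - POPULATION_MEAN) / se

# Two-sided p: probability of a deviation at least this large if the
# null hypothesis were true. (Normal approximation; with n = 10 a real
# analysis would use Student's t-distribution instead.)
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"mean = {sample_mean}, z = {z:.2f}, p = {p:.3g}")
print("p < 0.05: reject the null hypothesis" if p < 0.05 else "not significant")
```

With only ten invented data points all sitting near 25, the z statistic comes out huge and p lands far below 0.05; real behavioral data would be much noisier, which is exactly why the sample size matters as described above.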