
  • Brew day is ~8 hours. I would say it's half nannying: there's usually a 2-hour stretch where you can fully walk away, but the rest is either active cleaning, or you have to press a button or stir something every 10 minutes, so you are glued to your pot.

    Bottling is another ~2 hours or so (sanitizing, filling, and capping the bottles, plus cleaning the used fermenter). You can cut this down to half an hour if you forgo bottling, but that's another ~$1500 in capital costs for kegging equipment.

  • Yes, but that is also contingent on you placing absolutely zero value on your time.

    An absolute bottom-of-the-barrel recipe (10 lb 2-row, 1 lb C-10, 1 oz Hallertau, S-04) will run you about $30-40 per 20 L batch. So after you spend hundreds of dollars on equipment, you are only saving something like $40 per 10 hours spent brewing.
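
    To put that in wage terms, here is a rough sketch using the numbers above; the equipment cost is a made-up starter-kit figure, not a quote:

    ```ts
    // Effective hourly "wage" of homebrewing, from the figures above.
    const savingsPerBatch = 40;   // USD saved per batch vs. buying beer
    const hoursPerBatch = 10;     // ~8 h brew day + ~2 h bottling
    const equipmentCost = 300;    // hypothetical starter-kit cost (assumption)

    const hourlyWage = savingsPerBatch / hoursPerBatch;          // $4/h
    const batchesToBreakEven = equipmentCost / savingsPerBatch;  // 7.5

    console.log(`Effective wage: $${hourlyWage}/h`);
    console.log(`Break even after ${Math.ceil(batchesToBreakEven)} batches`);
    ```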

  • The thing that makes Everest dangerous is the low air pressure combined with everything else; that's why it's such a long, slow ascent: you have to acclimatize so your lungs don't give out.

    In the summer, the summit temperature is about the same as a bad Canadian prairie winter day.
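
    For a sense of scale, the isothermal barometric formula (a simplified model; the measured value is around 0.31-0.34 atm) puts the summit pressure at roughly a third of sea level:

    ```ts
    // P/P0 = exp(-M*g*h / (R*T)) -- isothermal barometric approximation
    const M = 0.0289644; // molar mass of dry air, kg/mol
    const g = 9.80665;   // gravity, m/s^2
    const R = 8.314;     // gas constant, J/(mol*K)
    const T = 255;       // rough mean air-column temperature, K (assumption)
    const h = 8849;      // Everest summit elevation, m

    const ratio = Math.exp((-M * g * h) / (R * T));
    console.log(`Summit pressure: ~${(ratio * 100).toFixed(0)}% of sea level`); // ~31%
    ```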

  • Chrome ships features that aren't on the standards track, and lazy/oblivious devs use those features to build their products, only to realize way too late that it won't work in Safari/Firefox because it relies on Chrome-only APIs.
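
    A concrete example: showDirectoryPicker() from the File System Access API is, at the time of writing, a Chromium-only WICG proposal rather than a cross-browser standard, so code has to feature-detect it. The fallback below is only a sketch:

    ```ts
    // showDirectoryPicker() exists in Chrome/Edge but not Firefox/Safari,
    // so probe for it instead of assuming every browser has it.
    const w = window as { showDirectoryPicker?: () => Promise<unknown> };

    async function pickFolder(): Promise<unknown> {
      if (typeof w.showDirectoryPicker === "function") {
        return w.showDirectoryPicker(); // Chromium-only code path
      }
      // Fallback for other browsers, e.g. <input type="file" webkitdirectory>
      throw new Error("Directory picker not supported in this browser");
    }
    ```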

  • This is disingenuous on OP's part.

    All LTS releases get 5 years of updates. Ubuntu Pro (which is free for non-commercial users, FYI) extends the LTS support window to 10 years, which is 5 years more than any other Linux distribution I know of.

  • I mean, I'm speaking from firsthand experience in academia. Like I mentioned, this obviously isn't the case for people running prohibitively costly experiments, but it absolutely is the case for teams where acquiring more data just means throwing a few more weeks of time at the lab; the grunt work is usually being done by the students anyway. There are a lot more labs in existence consisting of just a PI and 5-10 grad students/post-docs than there are mega-labs working at CERN.

    There were a handful of times I remember rerunning an experiment that was on the cusp, either to solidify a result or to rule out a significant finding that I correctly suspected was just luck. What is another 3 weeks of data collection when you are spending up to a year designing/planning/iterating/writing to get the publication?

  • "the danger is that valuable data from studies straddling the arbitrary p=0.05 line is simply being discarded by researchers"

    Or maybe experimenters are opting to do further research themselves rather than publish ambiguous results. If you aren't doing MRI or costly field work, fine-tuning your experimental design to get a conclusive result is a more attractive option than publishing a null result that could be significant, or a significant result that you fear might need retracting later.
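
    A toy simulation of that "on the cusp" situation (all parameters invented; a two-sample z-test on simulated data, not any particular field's design). With modest power, results landing near p = 0.05 mostly fail to replicate on a straight rerun, which is exactly why collecting more data before publishing is tempting:

    ```ts
    // How often does a "cusp" result survive a straight rerun?
    function randn(): number { // standard normal via Box-Muller
      return Math.sqrt(-2 * Math.log(1 - Math.random())) * Math.cos(2 * Math.PI * Math.random());
    }

    function phi(x: number): number { // normal CDF for x >= 0, Zelen-Severo approximation
      const t = 1 / (1 + 0.2316419 * x);
      const d = Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI);
      return 1 - d * t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
    }

    function pValue(effect: number, n: number): number {
      let a = 0, b = 0; // group sums, unit variance per observation
      for (let i = 0; i < n; i++) { a += randn() + effect; b += randn(); }
      const z = Math.abs(a / n - b / n) / Math.sqrt(2 / n);
      return 2 * (1 - phi(z)); // two-sided p for a two-sample z-test
    }

    const effect = 0.3, n = 50; // small true effect, modest sample (assumptions)
    let cusp = 0, replicated = 0;
    for (let i = 0; i < 10000; i++) {
      const p1 = pValue(effect, n);
      if (p1 > 0.01 && p1 < 0.10) {                  // straddling p = 0.05
        cusp++;
        if (pValue(effect, n) < 0.05) replicated++;  // rerun the experiment
      }
    }
    console.log(`${replicated}/${cusp} cusp results replicated at p < 0.05`);
    ```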

  • To elaborate a bit more: there is the MySQL resource usage and the Docker overhead. If you run two identical containers, the Docker overhead will only ding you once, but each container runs its own MySQL process that consumes its own CPU and memory.

    So by running two containers you are going to be using an extra couple hundred MB of RAM (whatever MySQL's minimum memory footprint is).
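
    Back-of-the-envelope, with invented numbers (`docker stats` will give you real ones on your host):

    ```ts
    // Rough memory model for n identical MySQL containers: the engine
    // overhead is paid once, the mysqld footprint is paid per container.
    const engineOverheadMB = 100; // shared Docker daemon overhead (assumption)
    const mysqldMB = 200;         // per-container mysqld footprint (assumption)

    const totalMB = (n: number) => engineOverheadMB + n * mysqldMB;

    console.log(totalMB(1)); // 300
    console.log(totalMB(2)); // 500 -- one extra footprint, not double the total
    ```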

  • Every time there is a transaction, the sender's funds are mixed together with those of a bunch of other senders, and the recipients receive their money from this shared pool, so there is no direct association between sender and receiver.
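
    A toy model of the idea, in the style of a CoinJoin mix, heavily simplified: real schemes add cryptography, equal denominations, and more, but the unlinkability comes from the shared pool:

    ```ts
    // Toy CoinJoin-style mix: inputs from many senders go into one pool and
    // outputs are paid from the pool, so no output links to a specific input.
    type Payment = { from: string; to: string; amount: number };

    function mix(batch: Payment[]) {
      // Crude shuffle, fine for a demo (not a proper Fisher-Yates).
      const shuffle = <T>(xs: T[]): T[] => xs.sort(() => Math.random() - 0.5);
      return {
        inputs: shuffle(batch.map(p => ({ from: p.from, amount: p.amount }))),
        outputs: shuffle(batch.map(p => ({ to: p.to, amount: p.amount }))),
        // An observer sees both lists but not the pairing: with equal
        // amounts, every input is an equally plausible source for every output.
      };
    }

    console.log(mix([
      { from: "alice", to: "dave", amount: 1 },
      { from: "bob", to: "erin", amount: 1 },
      { from: "carol", to: "frank", amount: 1 },
    ]));
    ```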