
Posts: 0 · Comments: 103 · Joined: 2 yr. ago

  • Here’s a fair explanation, but in short: there are different kinds of angels described in the Abrahamic texts and traditions, and only some of them resemble the classical “human with a pair of wings” depiction. Many wings, many eyes, chimeric fusions, elemental fire and light, and non-biological (to our understanding) forms are among the features in those descriptions.

  • It was settled mainly by Puritans, Calvinist Christians who thought the Church of England was too Catholic. If you’ve heard the term “puritanical,” it comes from them.

    The Pilgrims, specifically, were the sect that first landed in Massachusetts and sought to break away from the Church of England entirely.

  • “That is why there are second opinions, and ultimately it is the patient’s choice even if two doctors agree.”

    I 100% agree with this, but it wasn’t the question.

    The question was whether parents can override the patient’s choice when that choice is supported by a doctor’s recommendation. More specifically, it was whether parents can deny a reversible puberty-delaying hormone treatment against the patient’s wishes and force the patient to undergo puberty against their will.

  • I think with a human operator, we can be proactive. A person can be informed of bias, learn to recognize it, and even attempt to compensate for their own.

    An AI model is working off of aggregate past data that we already know is biased. There is currently no proactive anti-bias training that can be applied to an AI model without massively altering the dataset, which, at some level of alteration, loses its value as true-to-life data.

    Secondly, AI is a black box. We can’t see the inner workings of the model and determine what kinds of associations it is making to arrive at its result. So we don’t even know what part of the dataset would need to be altered to address the bias.

    Lastly, unless there are glaring defects, the default assumption by end users will be that any individual result is correct and unbiased, because “AI was made by smart people and data, and data doesn’t lie.” And interrogating and validating each result would defeat the whole purpose of using AI to cut those steps out of the process.
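    The first point above can be sketched as a toy example (all data invented): a trivial “model” that predicts the majority historical outcome for each group will faithfully reproduce whatever bias the historical records encode, with no malicious intent anywhere in the code.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical historical hiring records: (group, hired) pairs.
    # Group "A" was historically favored; the outcome gap reflects
    # past bias, not applicant quality.
    history = (
        [("A", 1)] * 80 + [("A", 0)] * 20 +
        [("B", 1)] * 30 + [("B", 0)] * 70
    )

    # A minimal "model": tally outcomes per group, then predict the
    # majority historical outcome for any new applicant in that group.
    counts = defaultdict(Counter)
    for group, hired in history:
        counts[group][hired] += 1

    def predict(group):
        return counts[group].most_common(1)[0][0]

    # Equally qualified applicants get different predictions purely
    # because the training data encoded the old bias.
    print(predict("A"))  # 1 (hire)
    print(predict("B"))  # 0 (reject)
    ```

    A real model is far more complex, but the failure mode is the same: the bias lives in the data, so no amount of honest engineering on the model side removes it without altering the data itself.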