Posts: 0 · Comments: 44 · Joined: 2 wk. ago

  • Are hiring managers actually less likely to hire women if they ask for market-rate pay, as opposed to men when they do the same?

    If, instead of giving passive-aggressive replies, you spent a moment reflecting on what I wrote, you would understand that ChatGPT reflects reality, including any bias in it. In short, the answer is yes, with high probability.

  • LLMs do not give the correct answer, just the most probable sequence of words given their training data.

    Studies of that kind (and there are hundreds) highlight two things:

    1- LLMs can be incorrect, biased, or give fake information (the so-called hallucinations).
    2- The previous point stems from the training material, which proves the existence of bias in society.

    In other words, an LLM recommending lower salaries for women is itself proof that a gender gap exists.
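
    Mechanically, "most probable sequence of words" is literal. Here is a minimal sketch of greedy autoregressive decoding, assuming a hypothetical `score_fn` as a stand-in for the trained network and a toy vocabulary; it illustrates the general technique, not any particular model's API.

    ```python
    import math

    def softmax(logits):
        # Turn raw scores into a probability distribution over the vocabulary.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def greedy_decode(score_fn, prompt_tokens, vocab, steps):
        # score_fn (hypothetical) maps the tokens so far to one score per
        # vocabulary entry, as a trained network's output layer would.
        tokens = list(prompt_tokens)
        for _ in range(steps):
            probs = softmax(score_fn(tokens))
            # No truth check anywhere: the sequence is extended with whatever
            # the training data made most probable, bias included.
            best = max(range(len(vocab)), key=lambda i: probs[i])
            tokens.append(vocab[best])
        return tokens

    # Toy demo with a dummy, deliberately skewed scorer (illustrative only).
    vocab = ["lower", "higher", "salary"]
    print(greedy_decode(lambda toks: [2.0, 1.0, 0.5], [], vocab, 2))
    ```

    If the training data associates a group with lower salaries, the scores for those continuations come out higher, and the loop above reproduces them faithfully.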

  • I almost got fired once when a close colleague spread rumors to pin her failures on me. I never found out precisely what she said, but it was "extremely bad" and in the realm of harassment (not sexual, but still...).

    Management was not sure and did not involve HR to make it formal. I felt under scrutiny for a while, so I kept all communication to a minimum: strictly professional, not even an emoji or a joke about the weather, in writing whenever possible, and including other people whenever I could. It was horrible and stressful. I considered quitting, or asking to be moved, since interacting with her was part of a daily routine, but I feared that could be seen as an admission of guilt.

    Eventually she was fired in a round of layoffs, and that was the end of it. Later I learned from some colleagues that they had never believed that shit, but nobody stepped in to say anything.

  • I do not believe that LLMs will ever be able to replace humans in tasks designed for humans. The reason is that human tasks require tacit knowledge (i.e., job experience), and that is not written down in any training material.

    However, we will start to have tasks designed for LLMs pretty soon. It has already been observed that LLMs work better on material produced by other LLMs.