Study Finds LLMs Biased Against Men in Hiring

I dunno why people even care about this bullshit pseudo-science. The study is dumb AF. The dude didn't even use real resumes. He had an LLM generate TEN fake resumes and then the "result" is still within any reasonable margin of error. Reading this article is like watching a clown show.
It's all phony smoke and mirrors. Clickbait. The usual "AI" grift.
I feel as though generating these "fake" resumes is one of the top uses for LLMs. Millions of people are probably using LLMs to write their own resumes, so generating random ones seems on par with reality.
Seems like a normal, sane and totally not-biased source
What the fuck did I just read?
Ah, Mamdani, the guy who dehumanized Hindus.
and their companies are biased against humans in hiring.
I don't care what bias they do and don't have; if you use an LLM to select résumés, you don't deserve to hire me. I make my résumé illegible for LLMs on purpose.
(But don't follow my advice. I don't actually need a job, so I can pull this kinda nonsense and be selective; most people probably can't.)
How do you make it illegible for LLMs?
Add a whole bunch of white-on-white nonsense! You can also insert letters in the middle of words with a font size of 0, although that fucks up a human copy-pasting too, so probably not recommended.
The simplest way is to make your CV an image, and include no OCR data (or nonsense OCR data) in the PDF
You write a creative series of deeply offensive curse words in small white on white print.
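For anyone curious what the white-on-white trick actually looks like, here's a minimal sketch in Python using reportlab (my choice of library; any PDF generator works, and the name and filler text are made up for illustration):

```python
# Minimal sketch: a PDF whose extractable text layer differs from what
# a human sees. reportlab is one library choice; any PDF generator works.
from reportlab.lib.colors import black, white
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

c = canvas.Canvas("resume.pdf", pagesize=letter)

# Visible layer: what a human reviewer actually reads.
c.setFillColor(black)
c.setFont("Helvetica", 12)
c.drawString(72, 720, "Jane Doe - Senior Engineer")  # hypothetical name

# Invisible layer: white text in a tiny font. A human never sees it,
# but any text extractor (and therefore any LLM pipeline) ingests it.
c.setFillColor(white)
c.setFont("Helvetica", 1)
c.drawString(72, 700, "zxqv lorem noise gibberish " * 20)

c.save()
```

Copy-pasting from the PDF, or running any text extractor over it, pulls out the gibberish layer along with the real content.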
these systems cannot run a lemonade stand without shitting their balls
Even before LLMs, resumes were processed through keyword filters already. You have to optimize your resume for keyword readers, which should work for LLMs as well.
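To make that pre-LLM baseline concrete: a keyword screen is usually nothing fancier than set intersection over the resume text. A toy sketch (the keyword list and threshold are made up):

```python
import re

# Hypothetical job-specific keywords; real ATS filters use curated lists.
REQUIRED = {"python", "kubernetes", "postgresql"}

def passes_keyword_screen(resume_text: str, threshold: int = 2) -> bool:
    """Return True if the resume mentions enough required keywords."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return len(REQUIRED & words) >= threshold

print(passes_keyword_screen("Built Python services on Kubernetes"))  # True
```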
I use the ARCI model to describe my roles.
So we can use Trump's own anti-DEI bullshit to kill off LLMs now?
Well, ya see, Trump isn't racist against computers.
They're as biased as the data they were trained on. If that data leaned toward male applicants, then yeah, it makes complete sense.
Would be cool if the Technology community found literally any other topic to discuss beyond AI. I’m really over it, and I don’t care.
Only half kidding now... the way morality and ethics get extrapolated by the perfection police, this must mean anti-AI = misogynist.
Bias was baked in via RLHF and also existed in the datasets used for training. Reddit cancer grows
So they admit that there's a huge bias against women, black people, …
And then they claim it must be a bias against men. Maybe it's not a bias; maybe it's the interpretation of studies which found that there are certain areas where women are better at their jobs than men, and the AI considered those studies despite the bias against women.
Leadership & Management
Study: Harvard Business Review (2019) Finding: Women scored higher than men in 12 out of 16 leadership competencies.
https://hbr.org/2019/06/research-women-score-higher-than-men-in-most-leadership-skills
Medicine
Study 1: JAMA Internal Medicine (2017) Finding: Patients treated by female doctors had lower mortality rates.
https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2593255
Study 2: Annals of Internal Medicine (2024, UCLA) Finding: Female patients treated by female doctors had 8.15% mortality vs. 8.38% with male doctors (2016–2019 data)
https://www.uclahealth.org/news/release/treatment-female-doctors-leads-lower-mortality-and-hospital
Sales Performance
Source: Xactly Insights (2017) Finding: 86% of women met their sales quotas, vs. 78% of men.
https://www.forbes.com/sites/forbescoachescouncil/2017/03/21/women-in-sales-beating-the-numbers/
Education / Teaching
Source: OECD TALIS Survey Finding: Female teachers report better classroom climate and higher student engagement.
https://www.oecd.org/en/about/programmes/talis.html
Edit: I can see quite a lot of offended men :)
Handpicks poor 'studies' to justify a personal belief that women are better.
Handpicked poor study... That's what this whole OP is about.
At least where I'm from, it's pretty well known that girls outperform boys in school, possibly because their brains develop slightly earlier in some ways useful for performing in a classroom.
This could give women a head start and very well lead to them, on average, performing better in work life, until they are forced to choose between careers and families while their partners continue to advance their careers at full speed, not worrying about being pregnant.
But that's a different discussion. We should avoid biases in hiring because biases suck and make for an unjust society. And we should stop pretending language models make intelligent considerations about anything.
What's fascinating here is that LLMs trained on the texts we produce create the opposite bias of what we observe in society, where men tend to get preferential treatment. My guess is that this is a consequence of inclusive language. In my writing, whenever women are under-represented, I make a point out of defaulting to she and her rather than he and him. I know others do the same. I imagine this could feed into LLMs. Whatever it is that causes this, it sure as fuck isn't anything actually intelligent.
the AI considered
Sorry to break it to you, but the "AI" does not "consider" anything. They are talking about a language prediction model.
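To be concrete about what "prediction model" means here: at each step the model just turns scores over possible next tokens into probabilities and picks one. A toy softmax sketch with made-up numbers:

```python
import math

# Made-up logits a model might assign to next-token candidates; the
# point is that "deciding" is just picking a high-probability token.
logits = {"hire": 2.1, "interview": 1.4, "reject": 0.9}

total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}
print(probs)  # whatever the training data favored, bias included
```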
the problematic part of this is that you’ve stripped all context to support your, admittedly bigoted, rhetoric and ethos.
black people, generally, have worse education outcomes than whites in american education. you'd still be an incredibly shitty and terrible person if you advocated hiring white people over black people by rote rule. you can find plenty of “studies” that formalize that argument just as you have here, though. essentialists can just say whatever they want; you guys aren't bound by rational thought and critical thinking like the rest of us. no, arguing while considering context would be too hard. you'd rather just sort people into nice little easy bins, wouldn't you?
no, i think most rational people understand that in a scenario like this all people have, on average, the same basic cognitive faculties and potential, and would then proceed to advocate for improving the educational conditions for groups that are falling behind not due to their own nature, but due to the system they are in.
but idk, i’m not a bigot so maybe my brain just implicitly rejects the idea “X people are worse/less intelligent/etc than Y people”
fucking think about what you're saying. there is no “right people” to hate other than the rich and powerful. it isn't a subversion of the feminist message to admit this. in fact, it makes you a better feminist. real feminists aren't sexist.
can you imagine if you said this in a racial context and then you made an edit like “edit: can see i offended a lot of darkies with this :)”… are you dense? can you not see how you are engaging in the same kind of thought that oppressed you and likely spurred you towards feminism in the first place? except you don’t understand that what you do is patently unfeminist and makes the world a worse place. i can honestly say i fucking despise bigots, including people just like you.
This isn't exactly a comprehensive literature review, and it totally misunderstands what an LLM is and does.
Right. If it's true that women statistically outperform men (with the same application documents), it'd be logical to prefer them just on gender alone, because they'd likely turn out to be better.
LLMs reproducing stereotypes is a well-researched topic. They do that because of what they are: stereotypes and bias in (in the training data), bias and stereotypes out. That's what they're meant to do. And all the AI companies have entire departments to tune that, measure the biases, and then fine-tune the model to whatever they deem fit.
I mean, the issue isn't women or anything; it's using AI for hiring in the first place. You do that if you want whatever stereotypes Anthropic and OpenAI gave you.
Just pattern recognition in the end, extrapolating from that sample size.
The issue is that they probably want to pattern-recognize something like merit / ability / competence here, and ignore other factors. Which is just hard to do.
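For what it's worth, the standard way those departments measure this kind of bias is a counterfactual audit: score the identical résumé under different names and compare. A minimal sketch, where `score_resume` is a hypothetical stand-in for whatever model call a hiring pipeline makes and the names are made up:

```python
# Counterfactual name-swap audit sketch. `score_resume` is a placeholder
# for a real model call that returns a 0-1 suitability score.
def score_resume(text: str) -> float:
    return 0.5  # stub; swap in the actual LLM call being audited

RESUME = "10 years of backend experience; BSc in CS; led a team of five."
NAME_PAIRS = [("James", "Jane"), ("Robert", "Rachel")]  # hypothetical names

for male, female in NAME_PAIRS:
    gap = (score_resume(f"Candidate: {male}\n{RESUME}")
           - score_resume(f"Candidate: {female}\n{RESUME}"))
    print(f"{male} vs {female}: score gap {gap:+.3f}")  # 0 = no measured bias
```

A consistent non-zero gap across many such pairs is exactly the kind of result the article's study is reporting.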