LLMs factor in unrelated information when recommending medical treatments

Say it with me, now: ChatGPT is not a doctor.
Now, louder for the morons in the back. Altman! Are you listening?!
ChatGPT is not a doctor. But models trained on medical imaging can actually be a very useful tool for doctors to use.
Even years ago, just before the AI "boom", researchers were asking doctors for details on how they examine patient images and then training models on that. They found that the AI was "better" than doctors specifically because it followed the doctors' guidance 100% of the time, thereby eliminating any bias that might keep a doctor from following their own training.
Of course, the splashy headline "AI better than doctors" was ridiculous. But it does show the benefit of giving doctors a neutral tool, especially when looking at images from people outside the typical demographics that much medical training is based on. (As in, mostly white men. For example, much of what doctors are taught about knee imaging comes from images of the knees of UK coal miners taken decades ago.)