When we look at passing scores, is there any way to quantitatively grade them for magnitude?
Not all bad advice is created equal.
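One way I could imagine doing it (not something the article describes, just a sketch): have reviewers assign each answer a severity level and weight errors by harm instead of counting them flat. Something like the following, where the 0-3 scale, the weights, and the names are entirely made up:

```python
# A minimal sketch of severity-weighted grading, assuming a hypothetical
# 0-3 harm scale per answer (not from the article; purely illustrative).

from dataclasses import dataclass

# Hypothetical harm levels: 0 = correct, 1 = suboptimal, 2 = harmful, 3 = dangerous.
SEVERITY_WEIGHTS = {0: 0.0, 1: 0.25, 2: 1.0, 3: 4.0}

@dataclass
class GradedAnswer:
    question_id: str
    severity: int  # 0-3, assigned by a human reviewer

def flat_error_rate(answers: list[GradedAnswer]) -> float:
    """Fraction of answers with any error, ignoring magnitude."""
    return sum(a.severity > 0 for a in answers) / len(answers)

def weighted_error_score(answers: list[GradedAnswer]) -> float:
    """Average severity-weighted penalty, so one dangerous answer
    counts far more than several mildly suboptimal ones."""
    return sum(SEVERITY_WEIGHTS[a.severity] for a in answers) / len(answers)

if __name__ == "__main__":
    graded = [
        GradedAnswer("q1", 0),
        GradedAnswer("q2", 1),
        GradedAnswer("q3", 0),
        GradedAnswer("q4", 3),  # one dangerous suggestion dominates the weighted score
    ]
    print(f"flat error rate:      {flat_error_rate(graded):.2f}")      # 0.50
    print(f"weighted error score: {weighted_error_score(graded):.2f}")  # 1.06
```

The point is just that a flat error rate and a harm-weighted score can tell very different stories when one of the errors is dangerous.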
The grading is a mess. It mixes qualitative and quantitative measures, plus statistical corrections "to make it fair".
Anyway, there's a ~30% margin on the scores needed to pass, so chances are that 9% is still better than the worst doctor who "passed".
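To make that arithmetic concrete (the exact cutoff and what the 9% counts aren't spelled out in the article, so these numbers are purely illustrative):

```python
# A minimal sketch of the pass-margin arithmetic, assuming a hypothetical
# 70% passing threshold (the "~30% margin" above). What the 9% figure
# actually measures isn't stated, so it's read here as an error rate.

PASS_THRESHOLD = 0.70  # hypothetical: a candidate passes with 70% of points

def passes(score: float) -> bool:
    """True if a raw score clears the hypothetical passing threshold."""
    return score >= PASS_THRESHOLD

ai_error_rate = 0.09                    # the 9% figure, read as an error rate
ai_score = 1.0 - ai_error_rate          # 0.91
worst_passing_doctor = 0.70             # a human who just scraped past the cutoff

print(passes(ai_score))                 # True: 0.91 >= 0.70
print(ai_score > worst_passing_doctor)  # True: the AI outscores the worst passer
```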
It's not reliable. The name itself is misleading. The "evidence" is apparently already open. The article doesn't seem to say whether the statistical model is open. My guess would be no.
Article about an AI that aims to give treatment suggestions to doctors, with some alarming results.