The Promise and Peril of Explainable AI
magic_lobster_party @kbin.social
No mention of Chain-of-Thought prompting: https://arxiv.org/abs/2201.11903
It has been shown that an LLM can give significantly more accurate answers if it's instructed to explain its reasoning step by step, with a worked example included in the prompt. The main problem is how to generate such examples. That's easy for simple math problems, but incredibly difficult for something like medical diagnoses.
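For illustration, here's a minimal sketch of what such a prompt looks like, using the worked math example from the paper. The `query_llm` helper is a hypothetical placeholder for whatever model client you use:

```python
# Few-shot chain-of-thought prompting in the style of Wei et al. (2022).
# The worked example below is the one from the paper.

COT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked step-by-step example so the model imitates the
    reasoning pattern before answering the new question."""
    return f"{COT_EXAMPLE}\nQ: {question}\nA:"

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM API call."""
    raise NotImplementedError("wire up your model client here")

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A cafeteria had 23 apples. They used 20 for lunch and bought "
        "6 more. How many apples do they have?"
    )
    print(prompt)  # the model should now answer with step-by-step reasoning
```

The hard part, as noted above, is writing a `COT_EXAMPLE` for domains where the reasoning steps aren't as mechanical as arithmetic.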
It's not really explainable AI, because the model cannot explain how it came up with its reasoning. It's still a black box we cannot possibly understand. But there's a possibility that future models will learn to explain their conclusions.