Poster in Affinity Workshop: Black in AI
Towards trustworthy AI-based algorithms in healthcare: A case of medical images
Mbangula Lameck Amugongo
Keywords: ethics
Over the last decade, many artificial intelligence (AI)-based solutions have been proposed in healthcare, yet only a few are in clinical use. Lack of trust in healthcare AI-based solutions is tied to the technical characteristics of AI and to how these properties can be understood clinically or biologically. Explainable AI (XAI) can improve the interpretability of AI-based solutions by providing qualitative and quantitative reasons for how AI models make their decisions. In this study, we compare two XAI tools: SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). Finally, we propose linking quantitative imaging features to biology.
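As an illustration of the kind of comparison the study describes, the sketch below shows how pixel-level SHAP attributions and superpixel-based LIME explanations could be produced for a generic medical image classifier. This is a minimal, hedged example, not the authors' pipeline: the model, file names, and preprocessing are hypothetical placeholders, and it assumes a Keras CNN with the standard `shap` and `lime` packages.

```python
# Minimal sketch (not the poster's actual pipeline): comparing SHAP and LIME
# explanations for a hypothetical CNN-based medical image classifier.
import numpy as np
import shap
import tensorflow as tf
from lime import lime_image

# Hypothetical trained Keras classifier and preprocessed images (placeholders)
model = tf.keras.models.load_model("chest_xray_cnn.h5")   # placeholder path
images = np.load("sample_images.npy")                      # shape: (N, H, W, 3)

# --- SHAP: attribute predictions to pixels using gradient-based Shapley estimates ---
background = images[:50]                       # background sample for the explainer
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(images[50:55])
shap.image_plot(shap_values, images[50:55])    # per-pixel attribution maps

# --- LIME: fit a local surrogate model on superpixel perturbations of one image ---
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    images[50].astype("double"),
    classifier_fn=model.predict,               # must return class probabilities
    top_labels=2,
    num_samples=1000,
)
masked_img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```

In this setup, SHAP yields pixel-level attribution maps while LIME highlights the superpixels most responsible for the predicted class, which is the contrast such a comparison would examine on medical images.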