

Invited Talk in Workshop: eXplainable AI approaches for debugging and diagnosis

[IT2] Explainability and robustness: Towards trustworthy AI

Andreas Holzinger


Abstract:

AI is very successful at certain tasks, even exceeding human performance. Unfortunately, the most powerful methods suffer from two shortcomings: it is difficult to explain why a particular result was obtained, and they lack robustness. These models are highly sensitive to even small changes in their inputs; perturbations of the input data can have a dramatic impact on the output and lead to completely different results. This matters in virtually all critical areas where data quality is poor, i.e., where we do not have the expected i.i.d. data. Consequently, the use of AI in areas that affect human life (agriculture, climate, health, ...) has led to an increased demand for trustworthy AI. In sensitive areas where traceability, transparency and interpretability are required, explainability is now even mandatory due to regulatory requirements. One possible step towards making AI more robust is to combine statistical learning with knowledge representations. For certain tasks it may also be beneficial to include a human in the loop: a human expert can, sometimes but of course not always, bring experience, expertise and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas the "why" is often more important than a pure classification result. Both explainability and robustness can therefore promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
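To make the sensitivity claim concrete, the following is a minimal sketch (not from the talk) of a fast-gradient-sign style perturbation applied to a hypothetical linear classifier. All names, dimensions, and numbers are illustrative assumptions; the point is only that a per-feature change that is small relative to the input can still flip the predicted class, because the changes add up across many dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000  # illustrative input dimensionality; more features -> smaller per-feature change needed

# Hypothetical "trained" linear classifier: p(y=1 | x) = sigmoid(w . x)
w = rng.normal(size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x)

# A clean input that the model assigns to class 1
x = rng.normal(size=d)
if w @ x < 0:          # ensure the clean prediction is class 1 for the demo
    x = -x
print("clean score:", predict(x))            # above 0.5

# For a linear model the input gradient is simply w, so shifting every feature
# by a small eps against sign(w) moves the logit by eps * sum(|w_i|),
# which grows with dimension even though each individual change stays tiny.
margin = w @ x
eps = 1.5 * margin / np.abs(w).sum()         # just enough per-feature change to cross the boundary

x_adv = x - eps * np.sign(w)
print("per-feature perturbation:", eps)      # small compared to typical |x_i| ~ 1
print("perturbed score:", predict(x_adv))    # now below 0.5 -> predicted class flips
```

The same effect is far more pronounced in deep networks on image data, where imperceptible pixel-level perturbations can change the top prediction; the linear case above is only the simplest setting in which the mechanism is visible.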