[IT2] Explainability and robustness: Towards trustworthy AI
Andreas Holzinger

Tue Dec 14 06:37 AM -- 07:21 AM (PST)

AI is very successful at certain tasks, in some cases even exceeding human performance. Unfortunately, the most powerful methods suffer from two shortcomings: it is difficult to explain why a particular result was obtained, and they lack robustness. Our most powerful machine learning models are sensitive to even small changes: perturbations in the input data can have a dramatic impact on the output and lead to completely different results. This matters in virtually all critical areas where data quality is poor, i.e., where we do not have the expected independent and identically distributed (i.i.d.) data. Therefore, the use of AI in areas that impact human life (agriculture, climate, health, ...) has led to an increased demand for trustworthy AI. In sensitive areas where traceability, transparency, and interpretability are required, explainability is now even mandatory due to regulatory requirements. One possible step toward making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop: a human expert can (sometimes, of course, not always) bring experience, expertise, and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the "why" is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust, ensuring that humans remain in control and that artificial intelligence complements human intelligence.
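The abstract's point about sensitivity to small perturbations can be made concrete with a toy example. The sketch below is not from the talk; the linear classifier, its weights, and the perturbation budget eps are all hypothetical. For a linear model, the worst-case perturbation within an L-infinity budget is exactly the sign-of-the-weights direction, so a tiny per-feature change can already flip the predicted class.

```python
import numpy as np

# Hypothetical toy linear classifier: w.x + b > 0 -> class 1, else class 0.
w = np.array([2.0, -3.0, 1.0])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.4, 0.2])   # clean input: score = 0.0 + 0.1 = 0.1 > 0
print(predict(x))               # -> 1

# Worst-case perturbation of L-infinity size eps: move every feature
# against the decision score, which lowers it by eps * ||w||_1 = 0.3.
eps = 0.05
x_adv = x - eps * np.sign(w)    # each feature changed by only 0.05
print(predict(x_adv))           # -> 0: the label flips
```

The same sign-of-gradient construction extends to deep networks (the fast gradient sign method), which is one reason i.i.d. test accuracy alone says little about a model's robustness.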

Author Information

Andreas Holzinger (Medical University Graz)

Andreas pioneered interactive machine learning with the human-in-the-loop. For his achievements he was elected a member of Academia Europaea, the European Academy of Science, in 2019, and has been a member of the European Laboratory for Learning and Intelligent Systems (ELLIS) since 2021. The use of AI in domains that impact human life (agriculture, climate, health, ...) has led to an increased demand for trustworthy AI. Andreas fosters robustness and explainability as enablers of trusted AI and advocates a synergistic approach to put the human in control of AI, aligning AI with human values, ethical principles, and legal requirements to ensure privacy, security, and safety.
