I. Mollas, N. Bassiliades, G. Tsoumakas, "Altruist: Argumentative Explanations through Local Interpretations of Predictive Models", 12th Hellenic Conference on Artificial Intelligence (SETN 2022), Sep 7-9, 2022, Corfu, Greece. ACM, New York, NY, USA, Article 21, 1–10.
Explainable AI is an emerging field providing solutions for acquiring insights into automated systems’ rationale. It has been put on the AI map by suggesting ways to tackle key ethical and societal issues. However, existing explanation techniques are often not comprehensible to the end user, and the lack of evaluation and selection criteria makes it difficult to choose the most suitable technique. In this study, we combine logic-based argumentation with Interpretable Machine Learning, introducing a preliminary meta-explanation methodology that identifies the truthful parts of feature-importance-oriented interpretations. Besides serving as a meta-explanation technique, this approach can also act as an evaluation or selection tool across multiple feature importance techniques. Experimentation strongly indicates that an ensemble of multiple interpretation techniques yields considerably more truthful explanations.
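The core idea of checking the "truthful parts" of a feature importance interpretation can be illustrated with a small sketch. This is not the paper's implementation: the perturbation size, the linear toy model, and the function `truthful_features` are all illustrative assumptions; the sketch only shows one plausible reading of the test, namely that an importance score is truthful if nudging the feature in its claimed direction moves the prediction accordingly.

```python
def truthful_features(predict, x, importances, eps=0.1):
    """Return indices of features whose signed importance passes a
    simple perturbation test (illustrative, not the paper's algorithm)."""
    base = predict(x)
    truthful = []
    for i, z in enumerate(importances):
        if z == 0:
            continue  # zero importance makes no directional claim
        x_up = list(x)
        x_up[i] += eps  # increase feature i slightly
        delta = predict(x_up) - base
        # A positive importance claims the output rises with the feature,
        # a negative one that it falls; keep the feature if the claim holds.
        if delta * z > 0:
            truthful.append(i)
    return truthful

# Toy linear model with known coefficients, so ground truth is obvious.
coeffs = [2.0, -1.0, 0.5]
predict = lambda x: sum(c * v for c, v in zip(coeffs, x))

x = [1.0, 1.0, 1.0]
importances = [2.0, -1.0, -0.5]  # the last score has the wrong sign
print(truthful_features(predict, x, importances))  # → [0, 1]
```

An ensemble, as suggested by the experiments, would run this check on the outputs of several importance techniques and keep the features that survive across them.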