I. Mollas, N. Bassiliades, G. Tsoumakas, "Truthful meta-explanations for local interpretability of machine learning models", Applied Intelligence, 53, pp. 26927–26948, 2023.

Author(s): I. Mollas, N. Bassiliades, G. Tsoumakas

Availability:

Appeared In: Applied Intelligence, 53, pp. 26927–26948, 2023.

Keywords: Explainable artificial intelligence, Interpretable machine learning, Local interpretation, Meta-explanations, Evaluation, Argumentation

Tags:

Abstract: The integration of automated Machine Learning-based systems into a wide range of tasks has expanded as a result of their performance and speed. Although there are numerous advantages to employing ML-based systems, if they are not interpretable, they should not be used in critical or high-risk applications. To address this issue, researchers and businesses have been focusing on ways to improve the explainability of complex ML systems, and several such methods have been developed. Indeed, so many techniques now exist that it is difficult for practitioners to choose the best one for their application, even when using evaluation metrics. As a result, the demand for a selection tool, a meta-explanation technique based on a high-quality evaluation metric, is apparent. In this paper, we present a local meta-explanation technique built on top of the truthfulness metric, a faithfulness-based metric. We demonstrate the effectiveness of both the technique and the metric by concretely defining all the relevant concepts and through experimentation.
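
The sketch below illustrates the general idea described in the abstract: scoring candidate local explanations with a truthfulness-style (faithfulness) check and selecting the best-scoring one as a meta-explanation. It is a minimal, assumption-laden illustration, not the paper's exact definitions: the function names (`truthfulness_score`, `meta_explain`), the single fixed perturbation, and the sign-agreement rule are all simplifications introduced here for illustration.

```python
import numpy as np

def truthfulness_score(predict_fn, x, importances, perturbation=0.1):
    """Illustrative faithfulness check (not the paper's exact metric):
    the fraction of features whose importance sign agrees with how the
    model's output moves when that feature is slightly increased."""
    base = predict_fn(x.reshape(1, -1))[0]
    agreements = 0
    for j, w in enumerate(importances):
        x_pert = x.copy()
        x_pert[j] += perturbation  # nudge feature j upward
        delta = predict_fn(x_pert.reshape(1, -1))[0] - base
        # a positively important feature should raise the prediction when
        # increased, a negatively important one should lower it; features
        # with zero importance are treated as trivially consistent here
        if w == 0 or np.sign(w) == np.sign(delta):
            agreements += 1
    return agreements / len(importances)

def meta_explain(predict_fn, x, candidate_explanations):
    """Select, among candidate feature-importance vectors, the one with the
    highest truthfulness-style score (a meta-explanation by selection)."""
    scores = {name: truthfulness_score(predict_fn, x, imp)
              for name, imp in candidate_explanations.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy usage: a linear "model" whose ideal attributions are its weights.
weights = np.array([2.0, -1.0, 0.5])
predict_fn = lambda X: X @ weights

x = np.array([1.0, 2.0, -0.5])
candidates = {
    "explainer_a": np.array([1.8, -0.9, 0.4]),   # signs match the model
    "explainer_b": np.array([-1.0, 1.0, 0.0]),   # signs mostly wrong
}
best, scores = meta_explain(predict_fn, x, candidates)
print(best, scores)  # "explainer_a" scores higher and is selected
```

In the paper, the truthfulness evaluation and the aggregation into a meta-explanation are defined more carefully (including an argumentation-based formulation); this sketch only conveys the select-by-faithfulness intuition.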