A. Vassiliades, N. Bassiliades, T. Patkos, “Argumentation and explainable artificial intelligence: a survey,” The Knowledge Engineering Review, vol. 36, p. e5, 2021.

Author(s): A. Vassiliades, N. Bassiliades, T. Patkos


Appeared In: The Knowledge Engineering Review, vol. 36, p. e5, 2021.

Keywords: Argumentation, Explainability, Agents, Machine Learning, Logic Programming


Abstract: Argumentation and eXplainable Artificial Intelligence (XAI) are closely related, as in recent years Argumentation has been used to provide Explainability in AI. Argumentation can show, step by step, how an AI system reaches a decision; it can support reasoning under uncertainty and find solutions when conflicting information is encountered. In this survey, we elaborate on the combined topics of Argumentation and XAI by reviewing the important methods, studies, and implementations that use Argumentation to provide Explainability in AI. More specifically, we show how Argumentation can enable Explainability for solving various types of problems in decision making, justification of an opinion, and dialogues. Subsequently, we elaborate on how Argumentation can help in constructing explainable systems in various application domains, such as Medical Informatics, Law, the Semantic Web, Security, Robotics, and some general-purpose systems. Finally, we present approaches that combine Machine Learning and Argumentation Theory towards more interpretable predictive models.
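To make the "step by step" and "conflicting information" points concrete, the following is a minimal sketch (not taken from the survey itself) of Dung-style abstract argumentation: given a set of arguments and an attack relation, the grounded extension — the most skeptical set of collectively acceptable arguments — can be computed by iterating the characteristic function until a fixpoint is reached. Function and variable names here are illustrative choices, not from the paper.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks), where `attacks` is a set of
    (attacker, attacked) pairs, by fixpoint iteration."""
    # Precompute the attackers of each argument.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if every one of
        # its attackers is itself attacked by some member of `extension`
        # (vacuously true for unattacked arguments).
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if defended == extension:  # fixpoint reached
            return extension
        extension = defended


# Example: a attacks b, b attacks c.
# a is unattacked, so a is in; a defeats b, which reinstates c.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
# → {'a', 'c'}
```

Each iteration of the loop corresponds to one explanatory step ("a is accepted because it is unattacked; c is accepted because its only attacker b is defeated by a"), which is the kind of traceable reasoning over conflicts that the survey attributes to Argumentation.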