I. Mollas, N. Bassiliades, G. Tsoumakas, “LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding”, presented at the AIMLAI-XKDD Workshop of the ECML-PKDD Conference, Würzburg, Germany, 16-20 September 2019.

Author(s): I. Mollas, N. Bassiliades, G. Tsoumakas

Availability:

Appeared In: AIMLAI-XKDD Workshop of the ECML-PKDD Conference, Würzburg, Germany, 16-20 September 2019

Keywords: Explainable, Interpretable, Machine Learning, Neural Networks, Autoencoders

Tags:

Abstract: Technological breakthroughs in smart homes, self-driving cars, health care and robotic assistants, together with reinforced law regulations, have critically influenced academic research on explainable machine learning. A number of researchers have implemented ways to explain, in a model-agnostic manner, any black box model for classification tasks. A drawback of building such agnostic explanators is that the neighbourhood generation process is universal and consequently does not guarantee true adjacency between the generated neighbours and the instance. This paper explores a methodology for explaining a neural network's decisions in a local scope, through a process that actively takes the neural network's architecture into account when creating an instance's neighbourhood, thereby assuring adjacency between the generated neighbours and the instance. Experiments with this methodology reveal a significant ability to capture delicate changes in feature importance.
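
The abstract's core idea, generating an instance's neighbourhood in the penultimate layer's latent space and decoding it back to the input space before fitting a local surrogate, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: encode, decode and predict are toy random stand-ins for the trained classifier's penultimate layer, a trained decoder, and the full network, and lionets_style_explanation is a hypothetical helper, not the authors' released implementation.

import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions for this sketch) for the trained components:
# the classifier up to its penultimate layer, a decoder trained to invert
# that mapping back to the input space, and the full classifier.
W_enc = rng.normal(size=(20, 8))
W_dec = rng.normal(size=(8, 20))
w_clf = rng.normal(size=8)

def encode(x):   # input -> penultimate-layer representation
    return np.tanh(x @ W_enc)

def decode(z):   # penultimate-layer representation -> input space
    return z @ W_dec

def predict(x):  # full network's output probability
    return 1.0 / (1.0 + np.exp(-encode(x) @ w_clf))

def lionets_style_explanation(instance, n_neighbours=500, scale=0.1):
    """Perturb the instance in the penultimate layer's latent space,
    decode the perturbations into input-space neighbours (so adjacency
    is enforced by the decoder rather than by universal sampling), label
    them with the network, and fit a linear surrogate whose coefficients
    serve as local feature importances."""
    z = encode(instance)
    noise = rng.normal(scale=scale, size=(n_neighbours, z.size))
    neighbours = decode(z + noise)
    targets = predict(neighbours)
    surrogate = Ridge(alpha=1.0).fit(neighbours, targets)
    return surrogate.coef_

importances = lionets_style_explanation(rng.normal(size=20))
print(importances.round(3))

The contrast with a model-agnostic explainer is in the noise line: perturbation happens in the latent space and passes through the decoder, rather than being applied directly to the raw input features.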

See Also: AI4EU Project