I. Mollas, N. Bassiliades, G. Tsoumakas, LioNets: a neural-specific local interpretation technique exploiting penultimate layer information. Applied Intelligence, Vol. 53, pp. 2538-2563, 2023.
Artificial intelligence (AI) is having an enormous impact on the rise of technology in every sector. Indeed, AI-powered systems monitor and decide on sensitive economic and societal issues. The future is moving towards automation, and we should not stand in its way. Many people, though, oppose this trend out of fear of uncontrollable AI systems. This concern is reasonable when it stems from social considerations, such as gender-biased or opaque decision-making systems. Explainable AI (XAI) is a tremendous step towards reliable systems, enhancing people's trust in AI. Interpretable machine learning (IML), a subfield of XAI, is likewise an urgent research topic. This paper presents a small but significant contribution to the IML community. We focus on a local, neural-specific interpretation process applied to textual and time-series data. The proposed technique, which we call “LioNets”, introduces novel approaches to presenting feature-importance-based interpretations. We also propose an innovative way to produce counterfactual words in textual datasets. Through a set of quantitative and qualitative experiments, we demonstrate the competitiveness of LioNets with other techniques and suggest its usefulness.
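As a rough illustration of what a local, penultimate-layer interpretation pipeline of this kind might look like, the sketch below generates a neighbourhood around an instance in the network's penultimate-layer space, maps the neighbours back to input space, and fits a transparent linear surrogate whose coefficients act as local feature importances. The `encode`, `decode`, and `predict` callables and all parameter values are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# A minimal, hypothetical sketch of a LioNets-style local explanation.
# Assumed callables (not defined here):
#   encode(x)  -> penultimate-layer representation of input x
#   decode(Z)  -> inputs reconstructed from representations Z
#   predict(X) -> the neural network's outputs for inputs X
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(x, encode, decode, predict,
                      n_neighbours=200, scale=0.1, seed=0):
    """Explain predict(x) via a linear surrogate fitted on a
    neighbourhood generated in the penultimate-layer space."""
    rng = np.random.default_rng(seed)
    z = encode(x)                                   # penultimate representation
    noise = rng.normal(0.0, scale, (n_neighbours, z.shape[-1]))
    neighbours_z = z + noise                        # perturb in the latent space
    neighbours_x = decode(neighbours_z)             # map neighbours back to inputs
    y = predict(neighbours_x)                       # query the neural network
    surrogate = Ridge(alpha=1.0).fit(neighbours_x, y)
    return surrogate.coef_                          # local feature importances
```

The returned coefficients can then be ranked and displayed as a feature-importance interpretation for the single instance being explained.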