Ioannis Mollas, Nick Bassiliades, and Grigorios Tsoumakas. Conclusive local interpretation rules for random forests. Data Mining and Knowledge Discovery, Vol. 36, pp. 1521-1574, 2022.

Author(s): I. Mollas, N. Bassiliades, G. Tsoumakas

Appeared In: Data Mining and Knowledge Discovery, Vol. 36, pp. 1521-1574, 2022.

Keywords: Explainable artificial intelligence, Interpretable machine learning, Local interpretation, Model-specific interpretation, Random forests

Abstract: In critical situations involving discrimination, gender inequality, economic damage, and even the possibility of casualties, machine learning models must be able to provide clear interpretations of their decisions. Otherwise, their opaque decision-making processes can lead to socioethical issues as they interfere with people's lives. Random forests excel in the aforementioned sectors, where the ability to explain their decisions is an obvious requirement. In this paper, we present LionForests, which builds on our preliminary work. LionForests is a random forest-specific interpretation technique that provides rules as explanations. It applies to binary classification tasks and extends to multi-class classification and regression tasks, and it is supported by a stable theoretical foundation. A time and scalability analysis suggests that LionForests is much faster than our preliminary work and is also applicable to large datasets. Experiments, including a comparison with state-of-the-art techniques, demonstrate the efficacy of our contribution. LionForests outperformed the other techniques in terms of precision, variance, and response time, but fell short in terms of rule length and coverage. Finally, we highlight conclusiveness, a unique property of LionForests that ensures the validity of its interpretations and distinguishes it from previous techniques.
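
For the LionForests algorithm itself, consult the paper. As a rough illustration of what "rules as explanations" for a random forest can look like, the following Python sketch derives a single conjunctive rule for one instance by intersecting the feature ranges along every tree's decision path. This is not the authors' method; the scikit-learn model, the iris dataset, and all names in the snippet are assumptions chosen purely for demonstration.

    # A minimal illustrative sketch, NOT the paper's LionForests algorithm:
    # it derives a conjunctive rule for a single instance by intersecting
    # the feature ranges along every tree's decision path in a scikit-learn
    # random forest. Dataset and model settings are demo assumptions.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    data = load_iris()
    X, y, names = data.data, data.target, data.feature_names

    forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
    instance = X[0]

    # Tightest lower/upper bound observed for each feature across all paths.
    lower = {f: -np.inf for f in range(X.shape[1])}
    upper = {f: np.inf for f in range(X.shape[1])}

    for estimator in forest.estimators_:
        tree = estimator.tree_
        node = 0
        while tree.children_left[node] != -1:  # descend until a leaf
            f, thr = tree.feature[node], tree.threshold[node]
            if instance[f] <= thr:  # scikit-learn splits left on <=
                upper[f] = min(upper[f], thr)
                node = tree.children_left[node]
            else:
                lower[f] = max(lower[f], thr)
                node = tree.children_right[node]

    # Render the intersected ranges as one conjunctive rule.
    conditions = []
    for f in range(X.shape[1]):
        if np.isfinite(lower[f]) and np.isfinite(upper[f]):
            conditions.append(f"{lower[f]:.2f} < {names[f]} <= {upper[f]:.2f}")
        elif np.isfinite(upper[f]):
            conditions.append(f"{names[f]} <= {upper[f]:.2f}")
        elif np.isfinite(lower[f]):
            conditions.append(f"{names[f]} > {lower[f]:.2f}")

    prediction = forest.predict(instance.reshape(1, -1))[0]
    print("IF " + " AND ".join(conditions) + f" THEN class {prediction}")

Note that naively intersecting every path, as above, tends to produce long rules; LionForests itself provides additional machinery and the conclusiveness property highlighted in the abstract, which this toy example does not attempt to reproduce.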