N. Tsolakis, C. Maga-Nteve, S. Vrochidis, N. Bassiliades, G. Meditskos, “Enhancing Cardiac AI Explainability through Ontology-based Evaluation”, Proc. 15th International Conference on Information, Intelligence, Systems and Applications (IISA2024), 17–19 July 2024, Chania, Greece, IEEE, pp. 1–4.
Keywords: Deep learning, Explainable AI, Prevention and mitigation, Data preprocessing, Focusing, Medical services, Ontologies, Data models, Complexity theory, Cardiology
The increasing utilization of Deep Learning models in healthcare and cardiology necessitates robust explainability to ensure trust, understandability and bias mitigation. Especially in cardiology, where life-critical decisions must be made, explainable Artificial Intelligence (XAI) solutions play a pivotal role. At present, the utility of Artificial Intelligence (AI) advancements in explainability is limited by the complexity of the explanations produced and their limited applicability in real-world clinical settings. To bridge this gap, we propose a novel ontology-based methodology for evaluating XAI techniques, focusing on cardiac “black-box” models. More specifically, we use ontologies to assess and ensure the clarity and relevance of AI explanations, fostering the development of explainable and trustworthy systems for clinical use. Our solution unfolds in sequential phases, beginning with rigorous data preprocessing and feature engineering, and continuing with model architecture design and implementation. The final stage deploys various XAI methods to produce meaningful explanations, which are then evaluated against an ontology scheme.
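The paper itself does not publish code, but the pipeline the abstract describes can be illustrated with a minimal sketch: train a “black-box” model on cardiac data, generate per-patient explanations with an XAI method, and score each explanation against ontology-derived concepts. The sketch below assumes SHAP as the XAI method, a scikit-learn gradient-boosting classifier as the black-box model, and a plain dictionary as a stand-in for the ontology scheme; the feature names, the FEATURE_TO_CONCEPT map, and the relevance_score metric are hypothetical placeholders, not the authors’ actual implementation.

```python
# Illustrative sketch of the abstract's pipeline, not the authors' code:
# black-box model -> SHAP explanations -> ontology-based relevance scoring.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
import shap

# Hypothetical cardiac input features.
FEATURES = ["age", "resting_bp", "cholesterol", "max_heart_rate", "st_depression"]

# Stand-in for the ontology scheme: each feature is annotated with a
# cardiology concept; features with no mapping count as clinically noisy.
FEATURE_TO_CONCEPT = {
    "resting_bp": "Hemodynamics",
    "cholesterol": "LipidProfile",
    "max_heart_rate": "CardiacStressResponse",
    "st_depression": "ECGFinding",
}

def relevance_score(attributions: np.ndarray) -> float:
    """Fraction of absolute attribution mass placed on ontology-mapped features."""
    total = np.abs(attributions).sum()
    mapped = sum(abs(attributions[i]) for i, f in enumerate(FEATURES)
                 if f in FEATURE_TO_CONCEPT)
    return float(mapped / total) if total > 0 else 0.0

# Toy dataset and "black-box" cardiac model.
X, y = make_classification(n_samples=500, n_features=len(FEATURES), random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# XAI step: per-patient feature attributions for the first ten patients.
shap_values = shap.TreeExplainer(model).shap_values(X[:10])

# Ontology-based evaluation step: score how much of each explanation
# lands on clinically meaningful concepts.
for i, sv in enumerate(shap_values):
    print(f"patient {i}: ontology relevance = {relevance_score(sv):.2f}")
```

In this toy form, an explanation scores high when most of its attribution mass falls on features that the ontology links to cardiology concepts; the paper’s actual evaluation presumably uses a richer OWL ontology and criteria beyond a single relevance ratio.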