DSpace Repository

A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods

Show simple item record

dc.contributor.author Ullah, Naeem
dc.contributor.author Guzmán-Aroca, Florentina
dc.contributor.author Martínez-Álvarez, Francisco
dc.contributor.author De-Falco, Ivanoe
dc.contributor.author Sannino, Giovanna
dc.date.accessioned 2026-03-10T11:49:27Z
dc.date.available 2026-03-10T11:49:27Z
dc.date.issued 2025-10
dc.identifier.citation Ullah N, Guzmán-Aroca F, Martínez-Álvarez F, De Falco I, Sannino G. A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods. Medical Image Analysis. October 2025;105:103665. doi:10.1016/j.media.2025.103665
dc.identifier.issn 1361-8415
dc.identifier.uri https://sms.carm.es/ricsmur/handle/123456789/25232
dc.description.abstract Artificial intelligence and deep learning are powerful tools for extracting knowledge from large datasets, particularly in healthcare. However, their black-box nature raises interpretability concerns, especially in high-stakes applications. Existing eXplainable Artificial Intelligence methods often focus solely on visualization or rule-based explanations, limiting the depth and clarity of the resulting interpretations. This work proposes a novel explainable AI method specifically designed for medical image analysis, integrating statistical, visual, and rule-based explanations to improve transparency in deep learning models. Statistical features are derived from deep features extracted using a custom MobileNetV2 model. A two-step feature selection method - zero-based filtering followed by mutual importance selection - ranks and refines these features. Decision tree and RuleFit models are employed to classify data and extract human-readable rules. Additionally, a novel statistical feature map overlay visualization generates heatmap-like representations of three key statistical measures (mean, skewness, and entropy), providing both localized and quantifiable visual explanations of model decisions. The proposed method has been validated on five medical imaging datasets - COVID-19 radiography, ultrasound breast cancer, brain tumor magnetic resonance imaging, lung and colon cancer histopathological, and glaucoma images - with results confirmed by medical experts, demonstrating its effectiveness in enhancing interpretability for medical image classification tasks.
dc.language.iso eng
dc.publisher Elsevier
dc.rights Attribution 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by/4.0/deed.es
dc.subject.mesh Humans
dc.subject.mesh COVID-19/diagnostic imaging
dc.subject.mesh Artificial Intelligence
dc.subject.mesh Deep Learning
dc.subject.mesh Image Interpretation, Computer-Assisted/methods
dc.subject.mesh Image Processing, Computer-Assisted/methods
dc.subject.mesh Diagnostic Imaging/methods
dc.subject.mesh Magnetic Resonance Imaging
dc.title A novel explainable AI framework for medical image classification integrating statistical, visual, and rule-based methods
dc.type info:eu-repo/semantics/article
dc.identifier.pmid 40505210
dc.relation.publisherversion https://linkinghub.elsevier.com/retrieve/pii/S1361841525002129
dc.type.version info:eu-repo/semantics/publishedVersion
dc.identifier.doi 10.1016/j.media.2025.103665
dc.journal.title Medical Image Analysis
dc.identifier.essn 1361-8423


Files in this item

This item appears in the following collection(s)


Attribution 4.0 International. Except where otherwise noted, this item's license is described as Attribution 4.0 International.
