The integration of Explainable Artificial Intelligence (XAI) into medical imaging is pivotal in addressing the “black-box” limitations of deep learning models, which often hinder clinical trust and regulatory approval. This review provides a comprehensive examination of XAI techniques that enhance interpretability and transparency in diagnostic imaging applications. Key approaches, including feature visualization (Grad-CAM, Integrated Gradients), attention mechanisms, symbolic reasoning, and example-based methods, are explored alongside their practical implementations. Specific cases in cardiac imaging, cancer diagnostics, and brain lesion segmentation illustrate the value of XAI in improving clinical decision-making and patient care. Moreover, the review highlights major challenges, including the trade-off between accuracy and interpretability, ethical and legal constraints, integration barriers within clinical workflows, and the complexity of medical data. To address these issues, future research directions are proposed, including the development of more robust example-based models, ethical frameworks, generalizable architectures, advanced visualization techniques, and interdisciplinary collaboration. With continued refinement and responsible deployment, XAI can make diagnostic AI models not only accurate but also interpretable and clinically relevant. This paper underscores the transformative potential of XAI in building trustworthy, transparent, and effective AI-driven diagnostic tools aligned with the practical demands of modern healthcare systems.
SUBMITTED: 19 May 2025
ACCEPTED: 26 June 2025
PUBLISHED: 1 July 2025
SUBMITTED to ACCEPTED: 38 days