XDecompo: eXplainable Decomposition Approach in Convolutional Neural Networks for Tumour Image Classification

Abdelsamea, Mohammed M. (2022) XDecompo: eXplainable Decomposition Approach in Convolutional Neural Networks for Tumour Image Classification. Sensors, 22 (24). ISSN 1424-8220

sensors-22-09875.pdf - Published Version
Available under License Creative Commons Attribution.



Among the various tumour types, colorectal cancer and brain tumours remain among the most serious and deadly diseases in the world. Many researchers are therefore interested in improving the accuracy and reliability of diagnostic machine learning models. In computer-aided diagnosis, self-supervised learning has proven effective when dealing with datasets with insufficient annotations. However, medical image datasets often suffer from data irregularities, making the recognition task even more challenging. The class decomposition approach has provided a robust solution to this problem by simplifying the learning of a dataset's class boundaries. In this paper, we propose a robust self-supervised model, called XDecompo, to improve the transferability of features from the pretext task to the downstream task. XDecompo is built on affinity propagation-based class decomposition to encourage effective learning of class boundaries in the downstream task. XDecompo includes an explainable component that highlights the pixels contributing most to classification and explains the effect of class decomposition on improving the speciality of the extracted features. We also explore the generalisability of XDecompo across different medical datasets, such as histopathology images for colorectal cancer and brain tumour images. The quantitative results demonstrate the robustness of XDecompo, with high accuracies of 96.16% and 94.30% for CRC and brain tumour images, respectively. XDecompo has demonstrated its generalisation capability, achieving high classification accuracy (both quantitatively and qualitatively) on different medical image datasets compared with other models. Moreover, a post hoc explainable method has been used to validate the feature transferability, demonstrating highly accurate feature representations.
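The class decomposition step described in the abstract can be sketched roughly as follows: each original class is clustered with Affinity Propagation, and every cluster becomes a new sub-class label, simplifying the class boundaries the downstream classifier must learn. This is a minimal illustration using scikit-learn on synthetic features, not the paper's implementation; the function name `decompose_classes` and all data are hypothetical.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def decompose_classes(features, labels):
    """Split each original class into sub-classes via Affinity Propagation.

    Each cluster found within an original class receives a unique new
    integer label, so sub-labels from different classes never overlap.
    """
    new_labels = np.empty_like(labels)
    next_label = 0
    for c in np.unique(labels):
        mask = labels == c
        # Affinity Propagation chooses the number of clusters automatically.
        ap = AffinityPropagation(random_state=0).fit(features[mask])
        new_labels[mask] = ap.labels_ + next_label
        next_label += ap.labels_.max() + 1
    return new_labels

# Toy example: two original classes, each containing two internal modes,
# so decomposition should yield more sub-classes than original classes.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2)),  # class 0
    rng.normal(6, 0.3, (20, 2)), rng.normal(9, 0.3, (20, 2)),  # class 1
])
y = np.array([0] * 40 + [1] * 40)
y_sub = decompose_classes(X, y)
print(len(np.unique(y_sub)))  # number of sub-classes after decomposition
```

After training on the sub-class labels, predictions can be mapped back to the original classes by remembering which sub-labels came from which class, which is the assembly step typically paired with class decomposition.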

Item Type: Article
Identification Number: https://doi.org/10.3390/s22249875
Accepted: 12 December 2022
Published Online: 15 December 2022
Uncontrolled Keywords: explainable artificial intelligence, convolutional neural networks, medical image classification, unsupervised pre-training, data irregularities
Subjects: CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science
Divisions: Faculty of Computing, Engineering and the Built Environment > School of Engineering and the Built Environment
Depositing User: Mohammed Abdelsamea
Date Deposited: 21 Dec 2022 10:45
Last Modified: 21 Dec 2022 10:45
URI: https://www.open-access.bcu.ac.uk/id/eprint/14035
