COVID-19 detection using a modified Xception transfer learning approach from computed tomography images

(1) * Kenan Morani (Izmir Democracy University, Izmir, Turkey)
(2) Esra Kaya Ayana (Yildiz Technical University, Istanbul, Turkey)
(3) Devrim Unay (Izmir Democracy University, Izmir, Turkey)
* Corresponding author

Abstract


The significance of efficient and accurate diagnosis amidst the unique challenges posed by the COVID-19 pandemic underscores the urgency for innovative approaches. In response to these challenges, we propose a transfer learning-based approach using a recently annotated Computed Tomography (CT) image database. While many approaches rely on intensive data preprocessing and/or complex model architectures, our method offers an efficient solution with minimal manual engineering. Specifically, we investigate the suitability of a modified Xception model for COVID-19 detection. The method adapts a pre-trained Xception model, incorporating both its architecture and its ImageNet pre-trained weights, with the model's output layer modified to produce the final diagnostic decision. Training used a batch size of 128 and 224x224 input images, downsized from the standard 512x512 resolution; no further processing was applied to the input data. Evaluation is conducted on the 'COV19-CT-DB' CT image dataset, which contains labeled COVID-19 and non-COVID-19 cases. Results show that the method achieves superior accuracy, precision, recall, and macro F1 score on the validation subset, outperforming the VGG-16 transfer model and thus offering enhanced precision with fewer parameters. Furthermore, our approach exceeds the baseline and other alternative methods reported on the COV19-CT-DB dataset. Finally, the adaptability of the modified Xception transfer learning-based model to the unique features of the COV19-CT-DB dataset showcases its potential as a robust tool for enhanced COVID-19 diagnosis from CT images.
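The adaptation described in the abstract can be illustrated with a minimal Keras sketch. This is not the authors' exact code; it only shows the general pattern of reusing an ImageNet-pretrained Xception backbone without its 1000-class head and attaching a binary classification output, with the pooling and head layers here being illustrative assumptions:

```python
# Minimal sketch of an Xception transfer-learning setup for binary
# COVID-19 / non-COVID-19 CT classification (illustrative, not the
# authors' exact architecture).
import tensorflow as tf
from tensorflow.keras import layers, models


def build_modified_xception(input_shape=(224, 224, 3), weights="imagenet"):
    # Load the Xception backbone; include_top=False drops the
    # 1000-class ImageNet classifier so a custom head can be added.
    base = tf.keras.applications.Xception(
        weights=weights,
        include_top=False,
        input_shape=input_shape,
    )
    # Pool the convolutional features and map to a single sigmoid unit
    # for the binary COVID-19 vs. non-COVID-19 decision.
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(base.input, out)


# Example usage (weights download on first call):
# model = build_modified_xception()
# model.compile(optimizer="adam", loss="binary_crossentropy",
#               metrics=["accuracy"])
# Training would then feed batches of 128 CT slices resized
# from 512x512 to 224x224, per the abstract.
```

The appeal of this pattern, as the abstract argues, is that almost no manual engineering is needed beyond resizing the inputs: the backbone and its pretrained weights are reused as-is, and only the head is new.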

Keywords


COVID-19 detection; Computed tomography images; Xception; Macro F1 score

   

DOI

https://doi.org/10.26555/ijain.v9i3.1432
      




Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
   andri.pranolo.id@ieee.org (publication issues)
