Privacy-Preserving U-Net Variants with Pseudo-Labeling for Radiolucent Lesion Segmentation in Dental CBCT

(1) * Amelia Ritahani Ismail (Department of Computer Science, International Islamic University Malaysia, Malaysia)
(2) Faris Farhan Azlan (Department of Computer Science, International Islamic University Malaysia, Malaysia)
(3) Khairul Akmal Noormaizan (Department of Computer Science, International Islamic University Malaysia, Malaysia)
(4) Nurul Afiqa (Department of Computer Science, International Islamic University Malaysia, Malaysia)
(5) Syed Qamrun Nisa (Department of Computer Science, International Islamic University Malaysia, Malaysia)
(6) Ahmad Badaruddin Ghazali (Department of Oral Maxillofacial Surgery & Oral Diagnosis, International Islamic University Malaysia, Malaysia)
(7) Andri Pranolo (Universitas Ahmad Dahlan, Indonesia)
(8) Shoffan Saifullah (AGH University of Science and Technology, Poland)
*corresponding author

Abstract


Accurate segmentation of radiolucent lesions in dental Cone-Beam Computed Tomography (CBCT) is vital for enhancing diagnostic reliability and reducing the burden on clinicians. This study proposes a privacy-preserving segmentation framework leveraging multiple U-Net variants—U-Net, DoubleU-Net, U2-Net, and Spatial Attention U-Net (SA-UNet)—to address challenges posed by limited labeled data and patient confidentiality concerns. To safeguard sensitive information, Differential Privacy Stochastic Gradient Descent (DP-SGD) is integrated using TensorFlow-Privacy, achieving a privacy budget of ε ≈ 1.5 with minimal performance degradation. Among the evaluated architectures, U2-Net demonstrates superior segmentation performance with a Dice coefficient of 0.833 and an Intersection over Union (IoU) of 0.881, showing less than 2% reduction under privacy constraints. To mitigate data annotation scarcity, a pseudo-labeling approach is implemented within an MLOps pipeline, enabling semi-supervised learning from unlabeled CBCT images. Over three iterative refinements, the pseudo-labeling strategy reduces validation loss by 14.4% and improves Dice score by 2.6%, demonstrating its effectiveness. Additionally, comparative evaluations reveal that SA-UNet offers competitive accuracy with faster inference time (22 ms per slice), making it suitable for low-resource deployments. The proposed approach presents a scalable and privacy-compliant framework for radiolucent lesion segmentation, supporting clinical decision-making in real-world dental imaging scenarios.
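The Dice and IoU figures reported above, and the confidence-based selection step of the pseudo-labeling strategy, can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the `threshold` value and the acceptance rule in `select_pseudo_labels` are hypothetical choices for demonstration.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((inter + eps) / (union + eps))

def select_pseudo_labels(probs: np.ndarray, threshold: float = 0.9):
    """Binarize a probability map and decide whether to accept it as a
    pseudo-label: accept only if nearly all pixels are confidently
    foreground (>= threshold) or background (<= 1 - threshold).
    The 95% acceptance cutoff below is an assumed rule, for illustration."""
    confident = np.logical_or(probs >= threshold, probs <= 1.0 - threshold)
    keep = bool(confident.mean() > 0.95)
    return (probs >= 0.5).astype(np.uint8), keep
```

In a semi-supervised loop of this shape, accepted masks are merged into the labeled pool and the network is retrained; repeating this refinement is what the abstract reports as a 14.4% validation-loss reduction over three iterations.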

   

DOI

https://doi.org/10.26555/ijain.v11i2.1529
      






Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
 andri.pranolo.id@ieee.org (publication issues)

