(2) Manjunath Ramanna Laman
(3) Lavanya Hegde
(4) Prathima Mahapurush
(5) Shivanandaswamy Mahapurush
*corresponding author
Abstract

Accurate segmentation of cell nuclei in cervical cytology images is crucial for automated cervical cancer screening, yet existing methods struggle with blurred boundaries, noise-induced degradation, and topologically implausible predictions. This research proposes Cell-Seg Tool, a novel triplet-branch diffusion-based AI tool that synergistically integrates three innovations to address these limitations. The Wavelet-Enhanced Contour Refinement Branch employs a learnable multi-scale discrete wavelet transform with adaptive coefficient attention to dynamically enhance boundary features across horizontal, vertical, and diagonal orientations. The Adaptive Spectral Noise Suppression module performs dual-domain processing using DCT-based filtering and uncertainty-guided fusion, together with bidirectional anchor semantic feedback that exchanges information across branches. The Topology-Aware Hybrid Loss combines a focal Tversky loss, a persistent homology loss, a directional boundary loss, a skeleton completeness loss, and a diffusion-noise MSE loss for multi-objective optimization. Comprehensive experiments on multiple datasets demonstrate superior performance, achieving a 94.45% Dice coefficient and a 19.2% reduction in boundary localization error compared with state-of-the-art methods. Unlike prior work that applies these techniques independently, this study shows that their adaptive, synergistic integration within a diffusion-based framework yields substantial improvements in boundary accuracy and topological correctness.
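The abstract names a focal Tversky term as one component of the hybrid loss but gives no implementation details. As a hedged illustration only, a standard focal Tversky loss for a binary mask can be sketched as follows in NumPy; the alpha, beta, and gamma values shown are common defaults from the literature, not the paper's settings:

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary segmentation mask.

    pred   : predicted foreground probabilities (any shape)
    target : binary ground-truth mask, same shape as pred
    alpha  : weight on false negatives; beta weights false positives
    gamma  : focal exponent that emphasises hard, low-overlap examples
    """
    pred = np.asarray(pred, dtype=np.float64).ravel()
    target = np.asarray(target, dtype=np.float64).ravel()

    tp = np.sum(pred * target)            # soft true positives
    fn = np.sum((1.0 - pred) * target)    # soft false negatives
    fp = np.sum(pred * (1.0 - target))    # soft false positives

    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

# A perfect prediction drives the loss to 0; a completely missed
# foreground pushes it towards 1.
print(focal_tversky_loss(np.ones(8), np.ones(8)))   # ~0.0
print(focal_tversky_loss(np.zeros(8), np.ones(8)))  # ~1.0
```

Setting alpha > beta penalises false negatives more heavily, which is the usual choice for small structures such as nuclei, where missed foreground pixels are costlier than spurious ones.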
Keywords: Adaptive wavelet-spectral; Cervical cells; Cervical cancer; Cell boundary denoising; Segmentation enhancement
DOI: https://doi.org/10.26555/ijain.v12i1.2267
References
[1] H. Sung et al., “Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA. Cancer J. Clin., vol. 71, no. 3, pp. 209–249, May 2021, doi: 10.3322/caac.21660.
[2] L. Mukku and J. Thomas, “Deep learning-based cervical lesion segmentation in colposcopic images,” Appl. Eng. Technol., vol. 3, no. 1, pp. 16–25, Apr. 2024, doi: 10.31763/aet.v3i1.1345.
[3] L. H. Ellenson and T.-C. Wu, “Focus on endometrial and cervical cancer,” Cancer Cell, vol. 5, no. 6, pp. 533–538, Jun. 2004, doi: 10.1016/j.ccr.2004.05.029.
[4] WHO, “WHO guideline for screening and treatment of cervical pre-cancer lesions for cervical cancer prevention: use of dual-stain cytology to triage women after a positive test for human papillomavirus (HPV),” p. 51, 2024. [Online]. Available at: https://www.who.int/publications/i/item/9789240091658.
[5] L. Mukku and J. Thomas, “TelsNet: temporal lesion network embedding in a transformer model to detect cervical cancer through colposcope images,” Int. J. Adv. Intell. Informatics, vol. 9, no. 3, p. 502, Nov. 2023, doi: 10.26555/ijain.v9i3.1431.
[6] M. Lalasa and J. Thomas, “A Review of Deep Learning Methods in Cervical Cancer Detection,” in Lecture Notes in Networks and Systems, Springer, Cham, 2023, pp. 624–633, doi: 10.1007/978-3-031-27524-1_60.
[7] N. B and I. V, “Enhanced machine learning based feature subset through FFS enabled classification for cervical cancer diagnosis,” Int. J. Knowledge-based Intell. Eng. Syst., vol. 26, no. 1, pp. 79–89, Jun. 2022, doi: 10.3233/KES-220009.
[8] H. Tang, C. Song, and M. Qian, “Automatic segmentation algorithm for breast cell image based on multi-scale CNN and CSS corner detection,” Int. J. Knowledge-based Intell. Eng. Syst., vol. 24, no. 3, pp. 195–203, Sep. 2020, doi: 10.3233/KES-200041.
[9] A. Sahoo and S. Chandra, “Medical image segmentation schemes for the analysis of gynaecological malignancies,” Int. J. Knowledge-based Intell. Eng. Syst., vol. 17, no. 4, pp. 291–304, Nov. 2013, doi: 10.3233/KES-130279.
[10] K. Gong, K. Johnson, G. El Fakhri, Q. Li, and T. Pan, “PET image denoising based on denoising diffusion probabilistic model,” Eur. J. Nucl. Med. Mol. Imaging, vol. 51, no. 2, pp. 358–368, 2024, doi: 10.1007/s00259-023-06417-8.
[11] K. Chen et al., “Quantifying uncertainty: Air quality forecasting based on dynamic spatial-temporal denoising diffusion probabilistic model,” Environ. Res., vol. 249, p. 118438, 2024, doi: 10.1016/j.envres.2024.118438.
[12] J. Wu et al., “MedSegDiff: Medical Image Segmentation with Diffusion Probabilistic Model,” Proceedings of Machine Learning Research, vol. 227. PMLR, pp. 1623–1639, Jan. 2024. Available at: https://proceedings.mlr.press/v227/wu24a.html.
[13] T. Chen, C. Wang, and H. Shan, “BerDiff: Conditional Bernoulli Diffusion Model for Medical Image Segmentation,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, Cham, Apr. 2023, pp. 491–501. doi: 10.1007/978-3-031-43901-8_47.
[14] C. H. Wong, “EnsDiff: Ensemble Precipitation Nowcasting with Diffusion,” p. 14, Jan. 2025. Available at: Google Scholar.
[15] M. Xia et al., “Anatomically and Metabolically Informed Diffusion for Unified Denoising and Segmentation in Low-Count PET Imaging,” Med. Image Anal., vol. 107, p. 103831, Oct. 2025, doi: 10.1016/j.media.2025.103831.
[16] M. Xia et al., “Multimodal Spatiotemporal Feature-Based Human Motion Pattern Recognition With CNN-Transformer-Attention Framework,” IEEE Internet Things J., vol. 12, no. 20, pp. 43883–43895, 2025, doi: 10.1109/JIOT.2025.3599403.
[17] A. Halder and D. Dey, “MorphAttnNet: An Attention-based morphology framework for lung cancer subtype classification,” Biomed. Signal Process. Control, vol. 86, p. 105149, 2023, doi: 10.1016/j.bspc.2023.105149.
[18] X. Fan, Y. Lu, B. Hu, Y. Shi, and B. Sun, “LW-MorphCNN: a lightweight morphological attention-based subtype classification network for lung cancer,” Meas. Sci. Technol., vol. 36, no. 1, p. 15703, 2025, doi: 10.1088/1361-6501/ad8a7c.
[19] B. Patnaik, D. S. K. Nayak, and S. Sahoo, “Attention enhanced hybrid deep learning model with 1D-CNN and BiLSTM for automated sleep apnea detection,” Discov. Appl. Sci., vol. 7, no. 12, p. 1376, 2025, doi: 10.1007/s42452-025-07639-1.
[20] H. A. Phoulady, M. Zhou, D. B. Goldgof, L. O. Hall, and P. R. Mouton, “Automatic quantification and classification of cervical cancer via Adaptive Nucleus Shape Modeling,” in 2016 IEEE International Conference on Image Processing (ICIP), IEEE, Sep. 2016, pp. 2658–2662. doi: 10.1109/ICIP.2016.7532841.
[21] A. Gençtav, S. Aksoy, and S. Önder, “Unsupervised segmentation and classification of cervical cell images,” Pattern Recognit., vol. 45, no. 12, pp. 4151–4168, Dec. 2012, doi: 10.1016/j.patcog.2012.05.006.
[22] M. E. Plissiti, C. Nikou, and A. Charchanti, “Automated detection of cell nuclei in pap smear images using morphological reconstruction and clustering.,” IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 2, pp. 233–41, Mar. 2011, doi: 10.1109/TITB.2010.2087030.
[23] H. Chang, Y. Zhou, A. Borowsky, K. Barner, P. Spellman, and B. Parvin, “Stacked Predictive Sparse Decomposition for Classification of Histology Sections,” Int. J. Comput. Vis., vol. 113, no. 1, pp. 3–18, May 2015, doi: 10.1007/s11263-014-0790-9.
[24] T. Chankong, N. Theera-Umpon, and S. Auephanwiriyakul, “Automatic cervical cell segmentation and classification in Pap smears,” Comput. Methods Programs Biomed., vol. 113, no. 2, pp. 539–556, Feb. 2014, doi: 10.1016/j.cmpb.2013.12.012.
[25] Z. Xing et al., “Diff-UNet: A diffusion embedded network for robust 3D medical image segmentation,” Med. Image Anal., vol. 105, p. 103654, 2025, doi: 10.1016/j.media.2025.103654.
[26] A. Pratondo, C.-K. Chui, and S.-H. Ong, “Robust Edge-Stop Functions for Edge-Based Active Contour Models in Medical Image Segmentation,” IEEE Signal Process. Lett., vol. 23, no. 2, pp. 222–226, Feb. 2016, doi: 10.1109/LSP.2015.2508039.
[27] K. Wang, X. Zhang, X. Zhang, Y. Lu, S. Huang, and D. Yang, “EANet: Iterative edge attention network for medical image segmentation,” Pattern Recognit., vol. 127, no. July, p. 108636, Jul. 2022, doi: 10.1016/j.patcog.2022.108636.
[28] P. Kumar, “Diffusion Models and Generative Artificial Intelligence: Frameworks, Applications and Challenges,” Arch. Comput. Methods Eng., vol. 32, no. 7, pp. 4049–4092, 2025, doi: 10.1007/s11831-025-10266-z.
[29] M. Zhang, J. Wu, Y. Ren, J. Yang, M. Li, and A. J. Ma, “Diffusionengine: Diffusion model is scalable data engine for object detection,” Pattern Recognit., vol. 171, p. 112141, 2026, doi: 10.1016/j.patcog.2025.112141.
[30] M. J. Ignacio, S. Shin, H. Jin, S. J. Yoo, D. Han, and Y.-G. Kim, “Revisiting U-Net: a foundational backbone for modern generative AI,” Artif. Intell. Rev., vol. 59, no. 45, pp. 1–52, 2026, doi: 10.1007/s10462-025-11450-0.
[31] S. Xu, B. Yang, R. Wang, D. Yang, J. Li, and J. Wei, “Single Tree Semantic Segmentation from UAV Images Based on Improved U-Net Network,” Drones, vol. 9, no. 4, p. 237, 2025, doi: 10.3390/drones9040237.
[32] B. H. Qsim, A. M. Khudhur, D. H. Kadir, and D. M. Saleh, “A Wavelet Shrinkage Mixed with a Single-level 2D Discrete Wavelet Transform for Image Denoising,” Kurdistan J. Appl. Res., vol. 9, no. 2, pp. 1–12, 2024, doi: 10.24017/science.2024.2.1.
[33] M. Uddin, Z. Fu, and X. Zhang, “Deepfake face detection via multi-level discrete wavelet transform and vision transformer,” Vis. Comput., vol. 41, no. 10, pp. 7049–7061, 2025, doi: 10.1007/s00371-024-03791-8.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)