(2) Cahyo Adhi Hartanto (Department of Informatics, Faculty of Sciences and Mathematics, Diponegoro University, Semarang, Indonesia)
(3) Panji Wisnu Wirawan (Department of Informatics, Faculty of Sciences and Mathematics, Diponegoro University, Semarang, Indonesia)
*corresponding author
Abstract

The latest developments in smartphone-based skin cancer diagnosis applications allow simple, portable melanoma risk assessment and diagnosis for early skin cancer detection. Because of the trade-off between time complexity and error rate when running a machine learning algorithm for image analysis on a smartphone, most skin cancer diagnosis apps execute the image analysis on a server. In this study, we investigate the performance of skin cancer image detection and classification on Android devices using the MobileNet v2 deep learning model. We compare performance across several aspects: object detection versus classification method, computer-based versus Android-based image analysis, image acquisition method, and parameter settings. Two skin cancers, actinic keratosis and melanoma, are used to test the performance of the proposed method. Accuracy, sensitivity, specificity, and running time of the tested methods are used as measurements. Based on the experimental results, the best parameters for the MobileNet v2 model on Android, using images from the smartphone camera, produce 95% accuracy for object detection and 70% accuracy for classification. The performance of the Android app for the object detection and classification model was feasible for skin cancer analysis. Android-based image analysis remains within the computing-time threshold that is convenient for the user and matches the computer's accuracy on high-quality images. These findings motivate the development of disease detection processing on Android using a smartphone camera, aiming at real-time detection and classification with high accuracy.
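The abstract describes fine-tuning MobileNet v2 for a two-class skin lesion problem and deploying it on Android. Below is a minimal, hypothetical sketch in TensorFlow/Keras of such a pipeline: transfer learning from an ImageNet-pretrained MobileNet v2 backbone, followed by conversion to TensorFlow Lite, the usual route for running a Keras model inside an Android app. The dataset path, hyperparameters, and file names are illustrative assumptions, not the authors' reported configuration.

# Minimal sketch (assumptions labeled): fine-tuning MobileNet v2 for
# two-class skin lesion classification (actinic keratosis vs. melanoma)
# and exporting to TensorFlow Lite for on-device Android inference.
import tensorflow as tf

IMG_SIZE = (224, 224)  # standard MobileNet v2 input resolution

# Hypothetical directory of labeled lesion images, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesions/train", image_size=IMG_SIZE, batch_size=32)

# MobileNet v2 backbone pre-trained on ImageNet, classifier head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: freeze the backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet v2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),                    # light regularization
    tf.keras.layers.Dense(2, activation="softmax"),  # actinic keratosis / melanoma
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # epoch count is illustrative

# Convert to TensorFlow Lite so the model can be bundled in an Android app.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("skin_lesion_mobilenetv2.tflite", "wb") as f:
    f.write(tflite_model)

On the device, the exported .tflite file would be loaded with the TensorFlow Lite Interpreter and fed camera frames; the Android-versus-computer comparison in the abstract concerns this on-device inference step.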
Keywords: Deep learning; Skin cancer; Android; MobileNet v2
DOI: https://doi.org/10.26555/ijain.v6i2.492
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)