Anik Nur Handayani*
Heru Wahyu Herwanto
Kohei Arai
*corresponding author
Abstract: This research aims to develop an object detection model that distinguishes, with high accuracy, between the gait of people with and without disabilities. Object detection is currently designed to detect people and is applied in both general and gender-based gait recognition. Examined further, gait recognition encompasses both non-disabled and disabled individuals: people with disabilities walk differently from non-disabled people, some using walking aids and others walking without them. YOLOv8 is a widely used platform for person detection. This research proposes an object detection model for normal people and for people with disabilities, covering both those who use assistive devices and those who do not. The DisabledGait dataset of 6,500 images is divided into three splits: 70% for training, 20% for validation, and 10% for testing. The model is evaluated using precision, recall, mAP50, and mAP50-95. Test results for the three classes, namely assistive, non-assistive, and normal, show the highest values in the assistive class, with an mAP50 of 0.98 and an mAP50-95 of 0.996. This study advances gait recognition by extending object detection to differentiate normal and disabled walking patterns accurately, including both assistive and non-assistive gaits, thereby enriching inclusive human-movement analysis. Beyond computer vision, the findings benefit healthcare, rehabilitation, and smart surveillance systems by enabling more accurate mobility assessment and accessibility-aware applications.
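The 70/20/10 partition described in the abstract can be sketched in plain Python; the function name, seed, and filename pattern below are illustrative, not part of the published method.

```python
import random

def split_dataset(filenames, train=0.70, val=0.20, test=0.10, seed=42):
    """Partition image filenames into train/val/test subsets.

    The 70/20/10 proportions follow the split described in the abstract;
    the helper name and seed are illustrative assumptions.
    """
    assert abs(train + val + test - 1.0) < 1e-9
    rng = random.Random(seed)
    shuffled = filenames[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    # round() avoids float truncation: int(6500 * 0.7) would yield 4549.
    n_train = round(n * train)
    n_val = round(n * val)
    return (shuffled[:n_train],                    # training set
            shuffled[n_train:n_train + n_val],     # validation set
            shuffled[n_train + n_val:])            # test set

# For the 6,500-image dataset this yields 4,550 / 1,300 / 650 images.
images = [f"img_{i:04d}.jpg" for i in range(6500)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 4550 1300 650
```

Shuffling before slicing keeps each split representative of all three gait classes rather than of whatever order the files were collected in.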
Keywords: Optimized, Computer Vision, Identify, Disabled, YOLOv8
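The three classes named in the abstract (assistive, non-assistive, normal) map naturally onto an Ultralytics-style dataset configuration; the paths below are hypothetical placeholders, not the authors' actual layout.

```yaml
# Hypothetical data.yaml for training YOLOv8 on the three gait classes.
path: datasets/disabled_gait   # dataset root (illustrative path)
train: images/train            # 70% split
val: images/val                # 20% split
test: images/test              # 10% split
names:
  0: assistive
  1: non-assistive
  2: normal
```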
DOI: https://doi.org/10.26555/ijain.v11i4.1977

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)