TUD-BISINDO: A new dataset and its recognition system using YOLO

(1) Muhammad Raihan (Telkom University, Indonesia)
(2) Aulia Ayu Dyah Lestari (Telkom University, Indonesia)
(3) * Suci Aulia (Telkom University, Indonesia)
(4) Yuli Sun Hariyani (Telkom University, Indonesia)
(5) Devira Anggi Maharani (Politeknik Negeri Malang, Indonesia)
*corresponding author

Abstract


This study addresses the need for digital inclusivity by developing a high-precision, real-time recognition system for Bahasa Isyarat Indonesia (BISINDO). Its primary contribution is the Telkom University Database (TUD)-BISINDO, a robust and varied dataset built to address the shortcomings of existing sign language databases, such as limited diversity in environments and camera angles. TUD-BISINDO comprises 1,040 original images plus 780 augmented images that compensate for the variations in lighting, viewing angle, and hand characteristics that limited earlier datasets. The YOLOv8l model, trained with the AdamW optimizer and an adaptive learning rate, performed exceptionally well, achieving a mAP50 of 99.30%, a mAP50-95 of 85.40%, 99.80% precision, and 99.70% recall. These results show that the model significantly outperforms the previous YOLOv5 baseline across all primary metrics. The model recognizes fingerspelled letters in real time with outstanding precision, although complex gestures such as the letters G and Z still require improvement. This research advances sign language recognition technology, encouraging inclusion and improving accessibility for real-time communication. Future studies should focus on diversifying the dataset and improving performance under challenging conditions.
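The precision, recall, and mAP50 figures reported above follow the standard object-detection convention: a predicted bounding box counts as a true positive when its Intersection-over-Union (IoU) with an unmatched ground-truth box is at least 0.5. As a minimal illustrative sketch (not the authors' evaluation code, and simplified to greedy matching without confidence ranking):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match each prediction to at most one ground-truth box
    at the given IoU threshold, then compute precision and recall."""
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp     # unmatched predictions
    fn = len(ground_truths) - tp   # missed ground-truth boxes
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```

mAP50 then averages the area under the precision-recall curve over all classes at the 0.5 threshold, while mAP50-95 averages it over IoU thresholds from 0.5 to 0.95, which is why the latter figure (85.40%) is the stricter of the two.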

Keywords


Indonesian Sign Language; BISINDO; YOLOv8l; Real-time recognition; Deep Learning

   

DOI

https://doi.org/10.26555/ijain.v12i1.2329
      



References


[1] M. Sanaullah et al., “Sign Language to Sentence Formation: A Real Time Solution for Deaf People,” Comput. Mater. Contin., vol. 72, no. 2, pp. 2501–2519, Mar. 2022, doi: 10.32604/cmc.2022.021990.

[2] H. ZainEldin et al., “Silent no more: a comprehensive review of artificial intelligence, deep learning, and machine learning in facilitating deaf and mute communication,” Artif. Intell. Rev., vol. 57, no. 7, p. 188, Jun. 2024, doi: 10.1007/s10462-024-10816-0.

[3] H. R. Ulya and Sufyanto, “Analisis Komunikasi Organisasi Pengurus Pramuka DKC Sidoarjo dalam Melaksanakan Program Kerja Lomba Prestasi Penegak,” Reslaj Relig. Educ. Soc. Laa Roiba J., vol. 6, no. 5, pp. 2838–2852, Apr. 2024, doi: 10.47467/reslaj.v6i5.2134.

[4] H. Amnur, Y. Syanurdi, R. Idmayanti, and A. Erianda, “Developing Online Learning Applications for People with Hearing Impairment,” JOIV Int. J. Informatics Vis., vol. 5, no. 1, pp. 32–38, Mar. 2021, doi: 10.30630/joiv.5.1.457.

[5] M. R. Hamandia and Maulidia, “Peningkatan Pemahaman mengenai Pendidikan Agama Islam pada Anak Penyandang Tunawicara melalui Penggunaan Bahasa Isyarat sebagai Komunikasi Nonverbal,” J-KIs J. Komun. Islam, vol. 3, no. 2, pp. 23–32, Dec. 2022, doi: 10.53429/j-kis.v3i2.545.

[6] B. Joksimoski et al., “Technological Solutions for Sign Language Recognition: A Scoping Review of Research Trends, Challenges, and Opportunities,” IEEE Access, vol. 10, pp. 40979–40998, 2022, doi: 10.1109/ACCESS.2022.3161440.

[7] O. D. Nurhayati, D. Eridani, and M. H. Tsalavin, “Sistem Isyarat Bahasa Indonesia (SIBI) Metode Convolutional Neural Network Sequential secara Real Time,” J. Teknol. Inf. dan Ilmu Komput., vol. 9, no. 4, pp. 819–828, Aug. 2022, doi: 10.25126/jtiik.2022944787.

[8] S. Apendi, C. Setianingsih, and M. W. Paryasto, “Deteksi Bahasa Isyarat Sistem Isyarat Bahasa Indonesia Menggunakan Metode Single Shot Multibox Detector,” eProceedings Eng., vol. 10, no. 1, p. 7, Mar. 2023, Accessed: Feb. 08, 2026. [Online]. Available at: https://openlibrarypublications.telkomuniversity.ac.id/index.php/engineering/article/view/19322

[9] D. Wang, M. Wang, Z. Zhang, T. Liu, C. Meng, and S. Guo, “Wearable Electronic Glove and Multilayer Para-LSTM-CNN-Based Method for Sign Language Recognition,” IEEE Internet Things J., vol. 11, no. 24, pp. 40787–40799, Dec. 2024, doi: 10.1109/JIOT.2024.3454215.

[10] Z. Wang et al., “Hear Sign Language: A Real-Time End-to-End Sign Language Recognition System,” IEEE Trans. Mob. Comput., vol. 21, no. 7, pp. 2398–2410, Jul. 2022, doi: 10.1109/TMC.2020.3038303.

[11] R. Soekarta, M. Yusuf, M. F. Hasa, and N. A. Basri, “Implementasi Deep Learning untuk Deteksi Jenis Obat Menggunakan Algoritma CNN Berbasis Website,” JIKA (Jurnal Informatika), vol. 7, no. 4, p. 455, Nov. 2023, doi: 10.31000/jika.v7i4.9751.

[12] M. K. Kotha and K. K. Pavan, “Deep Learning for Object Detection: A Survey,” Springer, Singapore, 2022, pp. 61–84. doi: 10.1007/978-981-19-4044-6_8.

[13] A. Munandar, Z. Yunizar, and S. Retno, “Indonesian Sign Language (BISINDO) Alphabet Detection System Using YOLO (You Only Look Once) Algorithm,” Proc. Malikussaleh Int. Conf. Multidiscip. Stud., vol. 4, p. 00001, Dec. 2024, doi: 10.29103/micoms.v4i.952.

[14] S. Daniels, N. Suciati, and C. Fathichah, “Indonesian Sign Language Recognition using YOLO Method,” IOP Conf. Ser. Mater. Sci. Eng., vol. 1077, no. 1, p. 012029, Feb. 2021, doi: 10.1088/1757-899X/1077/1/012029.

[15] M. Hussain, “YOLO-v1 to YOLO-v8, the Rise of YOLO and Its Complementary Nature toward Digital Manufacturing and Industrial Defect Detection,” Machines, vol. 11, no. 7, p. 677, Jun. 2023, doi: 10.3390/machines11070677.

[16] R. K. A. Wibowo, A. Sanjaya, and U. Mahdiyah, “Implementasi YOLOv8 Pada Pengenalan Sistem Isyarat Bahasa Indonesia,” Pros. SEMNAS INOTEK (Seminar Nas. Inov. Teknol., vol. 8, no. 1, pp. 139–146, Jul. 2024. [Online]. Available at: https://proceeding.unpkediri.ac.id/index.php/inotek/article/view/4920.

[17] E. Daniel, V. Kathiresan, C. Priyadarshini, R. Golden Nancy, and P. Sindhu, “Real Time Sign Recognition using YOLOv8 Object Detection Algorithm for Malayalam Sign Language,” Fusion Pract. Appl., vol. 17, no. 1, pp. 135–145, 2025, doi: 10.54216/FPA.170110.

[18] W. Jia and C. Li, “SLR-YOLO: An improved YOLOv8 network for real-time sign language recognition,” J. Intell. Fuzzy Syst., vol. 46, no. 1, pp. 1663–1680, Jan. 2024, doi: 10.3233/JIFS-235132.

[19] J. Dong, Z. Xia, and Q. Zhao, “Augmented Reality Assisted Assembly Training Oriented Dynamic Gesture Recognition and Prediction,” Appl. Sci., vol. 11, no. 21, p. 9789, Oct. 2021, doi: 10.3390/app11219789.

[20] M. Agustin, I. Hermawan, D. Arnaldy, A. T. Muharram, and B. Warsuta, “Design of Livestream Video System and Classification of Rice Disease,” JOIV Int. J. Informatics Vis., vol. 7, no. 1, p. 139, Feb. 2023, doi: 10.30630/joiv.7.1.1336.

[21] S. X. Tan, J. Y. Ong, K. O. M. Goh, and C. Tee, “Boosting Vehicle Classification with Augmentation Techniques across Multiple YOLO Versions,” JOIV Int. J. Informatics Vis., vol. 8, no. 1, p. 45, Mar. 2024, doi: 10.62527/joiv.8.1.2313.

[22] G. Yu, T. Zhao, and B. Ren, “The Dead-reckoning Navigation Guidance Law Based on Neural Network Collaborative Forecasting,” Int. J. Artif. Intell. Tools, vol. 32, no. 4, Jun. 2023, doi: 10.1142/S021821302350015X.

[23] D. LI et al., “TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation,” Adv. Neural Inf. Process. Syst., vol. 33, pp. 12034–12045, 2020, doi: 10.48550/arXiv.2010.05468.

[24] L. Pallahidu and J. A. Salas, “A Real-Time Hand Gesture Recognition System for Converting Sign Language to Alphabetic Character Using Deep Learning Approach,” in Brawijaya International Student Conference 2022, R. A. Atmoko and Y. S. Hayati, Eds., Faculty of Vocational Studies, Universitas Brawijaya, 2023, p. 250. Accessed: Feb. 08, 2026. [Online]. Available at: https://www.researchgate.net/publication/369089964_A_Real-Time_Hand_Gesture_Recognition_System_for_Converting_Sign_Language_to_Alphabetic_Character_Using_Deep_Learning_Approach.

[25] G. O. Kindy, G. Leonali, and H. Lucky, “Word-Level BISINDO: A Novel Video Indonesian Sign Language Dataset and Baseline Methods,” Procedia Comput. Sci., vol. 269, no. 23, pp. 249–258, Jan. 2025, doi: 10.1016/j.procs.2025.08.277.

[26] L. N. Fitri and M. Abduh, “Strategi Inovatif Guru dalam Membantu Anak Tuna Wicara Belajar dan Berkomunikasi di Sekolah Dasar,” Didakt. J K, vol. 13, no. 3, pp. 3847–3860, 2024. [Online]. Available at: https://jurnaldidaktika.org.

[27] S. Isnaniah, T. Agustina, Islahuddin, and F. Annisa, “The Use of Sign Language in Deaf Indonesian Classrooms in Surakarta,” KEMBARA J. Sci. Lang. Lit. Teach., vol. 9, no. 2, pp. 468–481, Oct. 2023, doi: 10.22219/kembara.v9i2.25990.

[28] N. A. Yardi, S. T. Guntoro, and M. Kom, “Survei Algoritma Pemrosesan Bahasa Pada Bisindo,” SEMASTER Semin. Nas. Teknol. Inf. Ilmu Komput., vol. 2, no. 1, pp. 255–264, Dec. 2023, Accessed: Feb. 10, 2026. [Online]. Available at: https://journal.unilak.ac.id/index.php/Semaster/article/view/18562

[29] M. Kotthapalli, D. Ravipati, and R. Bhatia, “YOLOv1 to YOLOv11: A Comprehensive Survey of Real-Time Object Detection Innovations and Challenges,” arXiv preprint arXiv:2508.02067, Aug. 2025, doi: 10.48550/arXiv.2508.02067.

[30] Iqra and K. J. Giri, “SO-YOLOv8: A novel deep learning-based approach for small object detection with YOLO beyond COCO,” Expert Syst. Appl., vol. 280, p. 127447, Jun. 2025, doi: 10.1016/j.eswa.2025.127447.

[31] M. Yaseen, “What is YOLOv8: An In-Depth Exploration of the Internal Features of the Next-Generation Object Detector,” arXiv preprint arXiv:2408.15857, Aug. 2024, doi: 10.48550/arXiv.2408.15857.

[32] R. Sapkota et al., “YOLO advances to its genesis: a decadal and comprehensive review of the You Only Look Once (YOLO) series,” Artif. Intell. Rev., vol. 58, no. 9, p. 274, Jun. 2025, doi: 10.1007/s10462-025-11253-3.

[33] N. A. Megantara and E. Utami, “Object Detection using YOLOv8: A Systematic Review,” Sist. J. Sist. Inf., vol. 14, no. 3, pp. 1186–1193, May 2025, doi: 10.32520/stmsi.v14i3.5081.

[34] B. Xiao, M. Nguyen, and W. Q. Yan, “Fruit ripeness identification using YOLOv8 model,” Multimed. Tools Appl., vol. 83, no. 9, pp. 28039–28056, Aug. 2023, doi: 10.1007/s11042-023-16570-9.

[35] G. Park, V. K. Chandrasegar, and J. Koh, “Accuracy Enhancement of Hand Gesture Recognition Using CNN,” IEEE Access, vol. 11, pp. 26496–26501, 2023, doi: 10.1109/ACCESS.2023.3254537.

[36] İ. Ünal and O. Eceoğlu, “A Lightweight Instance Segmentation Model for Simultaneous Detection of Citrus Fruit Ripeness and Red Scale (Aonidiella aurantii) Pest Damage,” Appl. Sci., vol. 15, no. 17, p. 9742, Sep. 2025, doi: 10.3390/app15179742.

[37] A. Kurniawan and D. M. Wonohadidjojo, “Sistem Deteksi dan Klasifikasi Truk Air Menggunakan YOLO v5 dan EfficientNet-B4,” J. Intell. Syst. Comput., vol. 5, no. 2, pp. 115–122, Oct. 2023, doi: 10.52985/insyst.v5i2.356.

[38] H. J. Bhuiyan, M. F. Mozumder, M. R. I. Khan, M. S. Ahmed, and N. Z. Nahim, “Enhancing Bidirectional Sign Language Communication: Integrating YOLOv8 and NLP for Real-Time Gesture Recognition & Translation,” in 2025 11th International Conference on Computing and Artificial Intelligence (ICCAI), IEEE, Mar. 2025, pp. 168–174. doi: 10.1109/ICCAI66501.2025.00035.

[39] Z. Zou, K. Chen, Z. Shi, Y. Guo, and J. Ye, “Object Detection in 20 Years: A Survey,” Proc. IEEE, vol. 111, no. 3, pp. 257–276, Mar. 2023, doi: 10.1109/JPROC.2023.3238524.




Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
 andri.pranolo.id@ieee.org (publication issues)

