Augmented Haar cascade classifier for real-time ball detection in humanoid robots under dynamic environments

(1) Gembong Edhi Setyawan (Universitas Brawijaya, Indonesia)
(2) * Edita Rosana Widasari (Universitas Brawijaya, Indonesia)
(3) Barlian Henryranu Prasetio (Universitas Brawijaya, Indonesia)
(4) Yasa Palaguna Umar (Universitas Brawijaya, Indonesia)
(5) Ivan Rafli Adipratama (Universitas Brawijaya, Indonesia)
*corresponding author

Abstract


This study proposes an Augmented Haar Cascade Classifier (AHCC) to enhance real-time ball detection for humanoid robots operating in dynamic environments. The method integrates Convex Hull mapping, HSV-based segmentation, and Hough Circle validation to overcome challenges such as fluctuating illumination, complex backgrounds, and partial occlusions. Experiments were conducted entirely on a CPU-only Intel NUC platform running ROS, without GPU acceleration, using a dataset containing variations in lighting, orientation, scale, and background clutter. Compared with two baselines, the standard Haar Cascade Classifier (HCC) and YOLOv5, the proposed AHCC achieved 97% accuracy, 83% recall, 97% precision, and an 89% F1-score, while requiring only 0.00849 s per frame and 8.97% memory usage. Although YOLOv5 reached 99% accuracy, it demanded greater computational resources (0.0344 s per frame, 22.3% memory usage), limiting its practicality for embedded robotic systems. The AHCC therefore offers an effective balance between detection reliability and computational efficiency, outperforming the traditional HCC and providing a lightweight alternative to GPU-dependent detectors such as Tiny-YOLO and MobileNet-SSD.
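
As a rough illustration of the staged verification idea summarized above, the sketch below chains the standard OpenCV building blocks named in the abstract: a Haar cascade proposes candidate regions, HSV thresholding segments ball-colored pixels, a convex hull completes partially occluded contours, and a Hough circle test validates circularity. This is a minimal sketch under stated assumptions, not the authors' implementation; the cascade file name, HSV bounds, and Hough parameters are illustrative placeholders.

import cv2
import numpy as np

# Hypothetical HSV range for an orange ball; tune for the actual target and lighting.
HSV_LOW = np.array([5, 120, 80], dtype=np.uint8)
HSV_HIGH = np.array([20, 255, 255], dtype=np.uint8)

def detect_ball(frame, cascade):
    """Return (x, y, r) of a validated ball in pixel coordinates, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Stage 1: Haar cascade proposes candidate regions.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4):
        roi = frame[y:y + h, x:x + w]
        # Stage 2: HSV segmentation keeps only ball-colored pixels inside the ROI.
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        # Stage 3: Convex Hull mapping fills ragged or partially occluded contours.
        hull = cv2.convexHull(max(contours, key=cv2.contourArea))
        filled = np.zeros(mask.shape, dtype=np.uint8)
        cv2.drawContours(filled, [hull], -1, 255, thickness=-1)
        # Stage 4: Hough Circle validation rejects non-circular candidates.
        circles = cv2.HoughCircles(filled, cv2.HOUGH_GRADIENT, dp=1.2, minDist=w,
                                   param1=100, param2=18,
                                   minRadius=max(1, w // 6), maxRadius=w)
        if circles is not None:
            cx, cy, r = circles[0][0]
            return int(x + cx), int(y + cy), int(r)
    return None

cascade = cv2.CascadeClassifier("ball_cascade.xml")  # hypothetical trained model file
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hit = detect_ball(frame, cascade)
    if hit is not None:
        cv2.circle(frame, (hit[0], hit[1]), hit[2], (0, 255, 0), 2)
    cv2.imshow("AHCC sketch", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()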

Keywords

Real-time detection; Humanoid robots; Augmented Haar Cascade Classifier; HSV segmentation; Convex Hull mapping; Dynamic environments

DOI

https://doi.org/10.26555/ijain.v12i1.2146


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
 andri.pranolo.id@ieee.org (publication issues)
