Gabor-enhanced histogram of oriented gradients for human presence detection applied in aerial monitoring

(1) Anton Louise Pernez De Ocampo* (De La Salle University, Philippines)
(2) Argel Bandala (De La Salle University, Philippines)
(3) Elmer Dadios (De La Salle University, Philippines)
* corresponding author

Abstract


In UAV-based human detection, the extraction and selection of the feature vector is one of the critical tasks in ensuring optimal performance of the detection system. Although UAV cameras capture high-resolution images, the relative size of human figures renders persons at very low resolution and contrast. Feature descriptors that can adequately discriminate local symmetrical patterns in a low-contrast image may therefore improve the detection of human figures in vegetative environments. Such a descriptor is proposed and presented in this paper. Initially, the acquired images are fed to a digital processor in a ground station, where the human detection algorithm is performed. Part of this algorithm is the GeHOG feature extraction, in which a bank of Gabor filters is used to generate textured images from the original. The local energy of each cell in the Gabor images is calculated to identify the dominant orientations, and the bins of the conventional HOG are then enhanced based on the dominant orientation index and the accumulated local energy in the Gabor images. To measure the performance of the proposed Gabor-enhanced HOG (GeHOG) features, they are compared with two other recent improvements to HOG, the Histogram of Edge Oriented Gradients (HEOG) and Improved HOG (ImHOG), for human detection on the INRIA dataset and on a custom dataset of farmers working in fields captured by an unmanned aerial vehicle. The proposed feature descriptor significantly improved human detection and outperformed the recent improvements to conventional HOG, raising the precision of human detection to 98.23% on the INRIA dataset. The proposed feature can significantly improve human detection in surveillance systems, especially in vegetative environments.
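To make the GeHOG pipeline described above more concrete, the following is a minimal sketch of the idea, assuming OpenCV's cv2.getGaborKernel for the Gabor bank and a simple cell-wise energy accumulation. The cell size, number of orientations, Gabor parameters, and the exact bin-enhancement rule are illustrative assumptions, not the authors' published implementation.

# Hypothetical sketch of Gabor-enhanced HOG (GeHOG) as summarized in the abstract.
# Parameter values and the enhancement rule are illustrative assumptions.
import cv2
import numpy as np

def gabor_bank(n_orient=9, ksize=15, sigma=3.0, lambd=6.0, gamma=0.5):
    """Bank of Gabor kernels covering orientations 0..pi in n_orient steps."""
    thetas = np.arange(n_orient) * np.pi / n_orient
    return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma, 0) for t in thetas]

def gehog_cells(gray, cell=8, n_bins=9):
    """Per-cell Gabor-enhanced orientation histograms (block normalization omitted)."""
    gray = gray.astype(np.float32) / 255.0
    h, w = gray.shape
    ch, cw = h // cell, w // cell

    # Conventional HOG ingredients: gradient magnitude and unsigned orientation.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=1)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=1)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # angles folded into 0..pi
    bin_idx = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    # Gabor "textured images": squared filter response, one per orientation.
    energy = np.stack([cv2.filter2D(gray, cv2.CV_32F, k) ** 2 for k in gabor_bank(n_bins)])

    feats = np.zeros((ch, cw, n_bins), np.float32)
    for i in range(ch):
        for j in range(cw):
            ys, xs = slice(i * cell, (i + 1) * cell), slice(j * cell, (j + 1) * cell)
            # Plain HOG histogram of the cell, weighted by gradient magnitude.
            hist = np.bincount(bin_idx[ys, xs].ravel(),
                               weights=mag[ys, xs].ravel(), minlength=n_bins)
            # Local Gabor energy per orientation; the dominant one boosts its HOG bin.
            cell_energy = energy[:, ys, xs].sum(axis=(1, 2))
            dominant = int(np.argmax(cell_energy))
            hist[dominant] += cell_energy[dominant]    # assumed enhancement rule
            feats[i, j] = hist
    return feats.ravel()

# Example use (hypothetical file name): feed the descriptor to any classifier, e.g. a linear SVM.
# feats = gehog_cells(cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE))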

Keywords


Human Detection; Gabor filters; Local image patterns; Aerial monitoring; Surveillance

   

DOI

https://doi.org/10.26555/ijain.v6i3.514
      





Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

