Radial greed algorithm with rectified chromaticity for anchorless region proposal applied in aerial surveillance

(1) * Anton Louise Pernez De Ocampo (Batangas State University, Philippines)
(2) Elmer Dadios (De La Salle University, Philippines)
*corresponding author

Abstract


In aerial images, human figures are often rendered at low resolution, appear small relative to other objects in the scene, and can closely resemble non-human objects. Localizing trust regions that may contain a human figure therefore becomes difficult and computationally expensive. The objective of this work is to develop an anchorless region proposal method that distinguishes potential persons from other objects and from the vegetative background in aerial images. Samples are taken at different angles and altitudes and under varying environmental factors such as illumination. The original image is rendered in a rectified color space to create a pseudo-segmented version in which objects of similar chromaticity are merged. The geometric features of the resulting segments are then computed and passed to the Radial-Greed Algorithm, which selects segments resembling human figures as the proposed regions for classification. The proposed method reduces computational cost by 96.76% compared with a brute-force sliding-window approach while achieving a hit rate of 95.96%. In addition, the method attains a 98.32% confidence level of hitting at least 92% of target proposals every time.
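To illustrate the rectified-color-space step described above, the sketch below uses a common chromaticity normalization (dividing each RGB channel by the pixel's total intensity) so that differently illuminated surfaces of the same color map to nearby values and can be merged into pseudo-segments. This is an illustrative assumption; the paper's exact transform and the `rectified_chromaticity` name are not specified in the abstract.

```python
import numpy as np

def rectified_chromaticity(image):
    """Map an RGB image (H, W, 3) to normalized rg-chromaticity.

    Each pixel is divided by its total intensity, discarding brightness
    so that objects of close chromaticity cluster together regardless
    of illumination. A minimal sketch, not the paper's exact transform.
    """
    img = image.astype(np.float64)
    intensity = img.sum(axis=2, keepdims=True)
    intensity[intensity == 0] = 1.0   # avoid division by zero on black pixels
    return img / intensity            # channels of each pixel now sum to 1

# Toy example: a bright and a dark pixel of the same hue yield the same
# chromaticity, so sunlit and shadowed vegetation would merge into one segment.
patch = np.array([[[200, 100, 100]], [[20, 10, 10]]], dtype=np.uint8)
chroma = rectified_chromaticity(patch)
```

Segments formed in this space could then be measured geometrically (area, aspect ratio, compactness) before a greedy radial selection of human-like candidates, as the abstract outlines.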

Keywords


Radial greed algorithm; Anchorless region proposal; Human detection; Aerial surveillance; UAV-based monitoring

   

DOI

https://doi.org/10.26555/ijain.v5i3.426
      






This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by Informatics Department - Universitas Ahmad Dahlan,  UTM Big Data Centre - Universiti Teknologi Malaysia, and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org, andri.pranolo@tif.uad.ac.id (paper handling issues)
     ijain@uad.ac.id, andri.pranolo.id@ieee.org (publication issues)
