Identifying threat objects using faster region-based convolutional neural networks (Faster R-CNN)

(1) * Reagan Galvez (Bulacan State University, Philippines)
(2) Elmer Pamisa Dadios (De La Salle University, Philippines)
*corresponding author

Abstract


Automated detection of threat objects in security X-ray images is vital to prevent unwanted incidents in busy places such as airports, train stations, and malls. Manual threat object detection is time-consuming and tedious, and the person on duty can overlook threat objects because of the limited time available to check every person's belongings. As a solution, this paper presents a faster region-based convolutional neural network (Faster R-CNN) object detector that automatically identifies threat objects in X-ray images using the IEDXray dataset, which is composed of scanned X-ray images of improvised explosive device (IED) replicas without the main charge. This paper extensively evaluates the Faster R-CNN architecture for threat object detection to determine which configuration improves detection performance. Our findings show that the proposed method can identify three classes of threat objects in X-ray images. In addition, the mean average precision (mAP) of the threat object detector can be improved by increasing the resolution of the input image, at the cost of detection speed. The threat object detector achieved 77.59% mAP with an inference time of 208.96 ms when the input image was resized to 900 × 1536. Results also show that increasing the number of bounding box proposals does not significantly improve detection performance: the detector achieved only 75.65% mAP with 150 bounding box proposals, and doubling the number of proposals reduced the mAP to 72.22%.
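
To make the two configuration knobs discussed above concrete, the sketch below shows how a 900 × 1536 input resize and a 150-proposal limit could be expressed in a generic Faster R-CNN implementation. This is a minimal illustration only: the paper does not state that torchvision was used, and the backbone, weights, and variable names here are assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' training pipeline) of how input resolution
# and the number of bounding box proposals map onto a Faster R-CNN detector.
# torchvision's Faster R-CNN is used here purely as an assumed stand-in; the
# 900x1536 resize, the three threat classes, and the 150 proposals come from
# the abstract, everything else is illustrative.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_CLASSES = 3 + 1  # three threat-object classes plus the background class

model = fasterrcnn_resnet50_fpn(
    weights=None,                 # in practice, load fine-tuned weights here
    weights_backbone=None,        # skip ImageNet download for this sketch
    num_classes=NUM_CLASSES,
    min_size=900,                 # shorter image side resized to 900 px
    max_size=1536,                # longer image side capped at 1536 px
    rpn_post_nms_top_n_test=150,  # box proposals kept after NMS at test time
)
model.eval()

# Dummy 3-channel image; real inputs would be the scanned IEDXray images.
image = torch.rand(3, 900, 1536)
with torch.no_grad():
    detections = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

print(detections["boxes"].shape, detections["scores"].shape)
```

Raising min_size/max_size or the proposal count trades inference time for possible mAP gains, which is the trade-off reported in the abstract.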

   

DOI

https://doi.org/10.26555/ijain.v8i3.952
      




