A novel convolutional feature-based method for predicting limited mobility eye gaze direction

(1) * Amal Hameed Khaleel (Basrah University, Iraq)
(2) Thekra H Abbas (Mustansiriyah University, Iraq)
(3) Abdul-Wahab Sami Ibrahim (Mustansiriyah University, Iraq)
*corresponding author

Abstract


Eye gaze direction is a critical problem, since many computer-vision applications depend on determining where a person is looking; users with limited mobility, in particular, direct their eyes toward targets to convey information. Deep neural networks are among the most accurate image-classification methods, and several gaze-classification approaches employ convolutional neural network models such as VGG, ResNet, and AlexNet. This research presents a new method for detecting human eye images and classifying eye gaze into five directions (left, right, up, down, straight), in addition to discriminating closed eyes. The proposed method (Di-eyeNET) is distinguished by a developed lighting-enhancement step (Split-HSV). It also reduces implementation time by using only two convolutional blocks, with a dropout layer after each block, to achieve fast response times and high accuracy. The design exploits the characteristics of human eye images: the eye region is small, so it cannot be greatly enlarged, and the iris lies at the center of the image, so the edges carry little useful information. Both a public dataset and a locally built dataset were used for evaluation. The proposed method achieves excellent results compared with previous methods, classifying five gaze directions rather than four, with high accuracy (99%), minimal loss, and the lowest training time. The research contributes an efficient method for classifying eye gaze directions, with faster execution and improved image lighting.
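The abstract describes two concrete components: a Split-HSV step that enhances image lighting, and a compact network (Di-eyeNET) built from two convolutional blocks with a dropout layer after each. The paper's exact configuration is not reproduced on this page, so the Python sketch below is only a plausible reading of that description: the CLAHE-on-the-V-channel enhancement, the 64x64 input size, the filter counts, and the dropout rates are all assumptions for illustration, not the authors' published implementation.

```python
# Hypothetical sketch (not the authors' released code): an HSV-based
# lighting enhancement step followed by a compact two-block CNN with
# dropout, loosely matching the abstract's description of Di-eyeNET.
import cv2
from tensorflow.keras import layers, models

def enhance_lighting_hsv(bgr_image):
    """Split into HSV, enhance the brightness (V) channel, merge back.

    One plausible reading of the paper's "Split-HSV" enhancement; the
    exact operation applied to the V channel is an assumption here.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v = clahe.apply(v)  # boost local contrast of brightness only
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)

def build_two_block_cnn(input_shape=(64, 64, 3), num_classes=6):
    """Two conv blocks, each followed by dropout, then a small head.

    Six outputs: left, right, up, down, straight, and eye closed.
    """
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Block 1
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Block 2
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),
        # Classifier head
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

A network this shallow has few parameters relative to VGG- or ResNet-style backbones, which is consistent with the abstract's claim of reduced training time and fast response.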

Keywords


Convolutional Neural Networks; Computer Vision; Eye Gaze; Haar Cascade; Iris

DOI

https://doi.org/10.26555/ijain.v10i2.1370

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.