James Rio Sasuwuk, Septiano Anggun Pratama, Rahma Laila
*corresponding author
Abstract
The COVID-19 pandemic was a devastating disaster for humanity worldwide. All aspects of life were disrupted, including daily activities and education. The education sector faced significant challenges at all levels, from kindergarten to elementary, junior high, and high school, as well as in higher education, where learning was forced to move online. Human emotions are primarily conveyed through facial expressions produced by facial muscle movements. Facial expressions serve as a form of nonverbal communication, reflecting a person's thoughts and emotions. This research aims to classify emotions based on facial expressions using a Convolutional Neural Network (CNN) and to detect faces using the Viola-Jones method in video recordings of online meetings. We utilize the VGG-16 architecture, which consists of 16 layers, including convolutional layers with the ReLU activation function and pooling layers, specifically max pooling. The fully connected layers also employ the ReLU activation function, while the output layer uses the Softmax function. The Viola-Jones method is used for face detection in images, achieving an accuracy of 87.6% in locating faces. Meanwhile, the CNN is applied for facial expression recognition, achieving an accuracy of 59.8% in classifying emotions.
Keywords: COVID-19; Emotion; Viola-Jones; CNN; VGG-16; Classification
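The two building blocks named in the abstract can be illustrated concretely. The sketch below is not the authors' implementation; it is a minimal NumPy illustration of the integral image and a two-rectangle Haar-like feature (the core mechanism that lets Viola-Jones evaluate features in constant time), together with the softmax function used in the CNN's output layer. All function names are illustrative, not taken from the paper.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of pixels in the h x w rectangle at (top, left),
    computed in O(1) from the integral image via four lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    """Two-rectangle Haar-like feature: left half sum minus
    right half sum (w must be even). Viola-Jones cascades
    threshold thousands of such features."""
    half = w // 2
    return (rect_sum(ii, top, left, h, half)
            - rect_sum(ii, top, left + half, h, half))

def softmax(z):
    """Numerically stable softmax, as used in the CNN output
    layer to turn logits into emotion-class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Tiny 4x4 "image" as a worked example.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))          # 0+1+4+5 = 10.0
print(two_rect_feature(ii, 0, 0, 4, 4))  # 52 - 68 = -16.0
print(softmax(np.array([2.0, 1.0, 0.1])).round(3))
```

In practice the detection step is typically done with a pretrained Haar cascade (e.g. OpenCV's `CascadeClassifier`), and the integral image is what makes scanning every window position and scale affordable.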
DOI: https://doi.org/10.26555/ijain.v11i1.1602

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571 (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
andri.pranolo.id@ieee.org (publication issues)