Japanese sign language classification based on gathered images and neural networks

(1) * Shin-ichi Ito (Tokushima University, Japan)
(2) Momoyo Ito (Tokushima University, Japan)
(3) Minoru Fukumi (Tokushima University, Japan)
* corresponding author

Abstract


This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with a neural network comprising convolutional and pooling layers (CNN). Gathered images are generated from mean images: for each block position, the difference between the corresponding blocks of the mean image and each JSL motion image is calculated, and the gathered image is assembled from the blocks with the maximum difference value. The CNN extracts features from the gathered images, and a multi-class support vector machine and a multilayer perceptron are then employed to classify 20 JSL words. In the experiments, the proposed method achieved a mean recognition accuracy of 94.1%. These results suggest that the proposed method obtains the information needed to classify the sampled words.
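As a rough illustration of the gathered-image generation step, the sketch below assembles one gathered image from a stack of motion frames: it computes the per-pixel mean image, compares each frame to the mean image block by block, and keeps, at every block position, the frame block with the maximum difference value. The grayscale input, the 16-pixel block size, and the sum-of-absolute-differences measure are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of gathered-image generation, assuming grayscale frames of
    # equal size; block size and difference measure are illustrative choices.
    import numpy as np

    def gathered_image(frames: np.ndarray, block: int = 16) -> np.ndarray:
        """frames: (T, H, W) stack of JSL motion frames -> one (H, W) gathered image."""
        mean_img = frames.mean(axis=0)                    # per-pixel mean over time
        out = np.empty_like(mean_img)
        h, w = mean_img.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                ref = mean_img[y:y + block, x:x + block]  # block of the mean image
                # Difference between every frame's block and the mean-image block.
                diffs = np.abs(frames[:, y:y + block, x:x + block] - ref).sum(axis=(1, 2))
                # Keep the block whose difference from the mean is largest.
                out[y:y + block, x:x + block] = frames[diffs.argmax(), y:y + block, x:x + block]
        return out

    # Example: 30 random 128x128 frames -> one gathered image.
    img = gathered_image(np.random.rand(30, 128, 128))

The CNN feature extraction and the SVM/MLP classification of the 20 words would then operate on images produced this way; those stages follow standard architectures and are not sketched here.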

Keywords


Japanese sign language; Gathered image; Mean image; Convolutional neural network


DOI

https://doi.org/10.26555/ijain.v5i3.406





This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
