Fish species recognition using transfer learning techniques

(1) * Jaisakthi Seetharani Murugaiyan (Vellore Institute of Technology, India)
(2) Mirunalini Palaniappan (SSN College of Engineering, India)
(3) Thenmozhi Durairaj (SSN College of Engineering, India)
(4) Vigneshkumar Muthukumar (SSN College of Engineering, India)
*corresponding author

Abstract


Marine species recognition is the process of identifying various species, which helps in population estimation and in identifying endangered types so that remedial actions can be taken. The superior classification performance of deep learning comes from its ability to estimate millions of parameters, which in turn requires large annotated datasets. However, many fish species are becoming extinct, which may reduce the number of available samples, and the unavailability of a large dataset is a significant hurdle for applying deep neural networks. To overcome this problem, we propose a transfer learning technique that takes underwater fish images as input and detects the fish species using a pre-trained Google Inception-v3 model. We evaluated the proposed method on the Fish4Knowledge (F4K) dataset and obtained an accuracy of 95.37%. This research can help marine biologists identify fish presence and abundance, understand the underwater environment so as to encourage its preservation, and study the behavior and interactions of marine animals.
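The transfer-learning pipeline summarized above can be sketched in a minimal, self-contained form. The actual method extracts bottleneck features from a pre-trained Inception-v3 (2048-dimensional vectors from its final pooling layer, which requires TensorFlow/Keras and the F4K images) and then trains an SVM classifier on them, as the keywords indicate. Here, synthetic 2048-dimensional feature vectors stand in for the Inception-v3 features so the classification stage can be run in isolation; the class count, sample sizes, and noise scale are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Stand-in for pre-trained Inception-v3 bottleneck features: each image
# is represented by a 2048-dimensional vector (the size of Inception-v3's
# final pooling layer). Real usage would compute these from F4K images.
n_species, per_class, dim = 5, 60, 2048
centers = rng.normal(size=(n_species, dim))
X = np.vstack([c + 0.15 * rng.normal(size=(per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_species), per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Linear SVM trained on the fixed deep features -- the "SVM classifier"
# stage named in the keywords. The pre-trained network is frozen; only
# this shallow classifier is fit, which is what makes small datasets viable.
clf = SVC(kernel="linear", C=1.0).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"hold-out accuracy: {acc:.2f}")
```

Because only the shallow classifier is trained, this stage needs far fewer labeled images than training a deep network end to end, which is the core argument of the abstract.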

Keywords


Fish classification; Transfer learning; SVM classifier; Deep neural network

   

DOI

https://doi.org/10.26555/ijain.v7i2.610
      







Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Advances in Intelligent Informatics
ISSN 2442-6571  (print) | 2548-3161 (online)
Organized by UAD and ASCEE Computer Society
Published by Universitas Ahmad Dahlan
W: http://ijain.org
E: info@ijain.org (paper handling issues)
   andri.pranolo.id@ieee.org (publication issues)
